6. 12. 2022 17:00 Radim Polčák: Regulating AI: Compliance, Liability, Rights to Data and the Search for Ponder Stibbons
There has recently been a significant shift in the EU's general approach to regulating the development and deployment of AI-based products and services. While the Juncker Commission preferred economic tools, the von der Leyen Commission chose to react with a range of legislative instruments. Besides the specifically targeted drafts of the AI Act, the AI Liability Directive and the Cyber-Resilience Act, the field of AI is (or will shortly become) subject to rules in the Digital Services Act, the Digital Markets Act, the Digital Markets Directive, the NIS Directive, the Cybersecurity Act, the draft Data Act, the Data Governance Act, the draft Production and Preservation Orders Directive, the draft CoE Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, the OECD AI Principles, the UNESCO Recommendation on the Ethics of Artificial Intelligence, etc.
However messy the above matrix of legislative and policy instruments may seem, their regulatory impact can be narrowed down to the following three main areas of interest for those who engage in research, development and deployment of AI-based products and services: 1. ex ante regulation (compliance), 2. ex post regulation (liability) and 3. regulation covering data that are used for machine learning or other constructive operations by AI-based systems.
The note will focus on the currently most relevant issues in the above three categories. We will particularly discuss performance-based rules covering high-risk systems (incl. the recently heated debate on general-purpose AI systems), accountability and standards of proof in liability cases (incl. the consequent impact on AI industries, namely with regard to insurance and re-insurance models), and tools that may help mitigate the complexity and hardship of making data available for AI research and development, while preserving the unprecedented standard of protection of fundamental rights in the EU.
The adequate legal method for regulating AI is pragmatic, and any regulatory approach must always consider the institutional backing that translates legal and subsequent (ethical, economic, etc.) rules into actual practice. The reference to Pratchett's wizard in the title thus indicates that the note will also focus on the challenges that accompany attempts to establish a functional institutional backing for all the above rules and regulations, both at the level of the EU and of the member states.
Radim Polčák is the Vice-Rector of Masaryk University and Head of the Institute of Law and Technology of the Faculty of Law of MU. He is a founding fellow of the European Law Institute and the European Academy of Law and ICT, a panellist at the ADR.eu Tribunal and the head of ILT observer delegations at UNCITRAL and UNODC. He is the General Chair of the Cyberspace conference (cyberspace.muni.cz), the founder of the Masaryk University Journal of Law and Technology (mujlt.law.muni.cz) and the Review of Law and Technology (in Czech – revue.law.muni.cz). He has contributed significantly to the development of the legal regulatory framework of Czech and European cybersecurity, and in 2017 and 2018 served as a Special Advisor to the European Commission for robotics and data protection. He is a national expert for the Czech Republic in the Global Partnership on AI (GPAI), a member of the Legislative Council of the Czech Government, a member of the Czech Digital Team and a member of the NCISA Appellate Committee. Radim has authored or co-authored over 150 scientific papers, books and articles, namely on topics related to cyberlaw and legal philosophy.
17. 10. 2022 14:00 Mireia Diez Sánchez: Speaker diarization: automatically finding "who speaks when" in an audio conversation
Speaker diarization is the task of automatically determining the speaker turns in a recording of a conversation or, as commonly stated, finding "who spoke when". Although seemingly easy for humans, speaker diarization remains a very challenging task in the automatic speech processing field. Speaker diarization deals not only with voice activity detection (VAD) and the complex speaker recognition stage but also faces the problem of having an unknown number of speakers in utterances, segmentation of speech into speaker turns (finding boundaries between speakers), and treatment of overlapped speech (cross-talk). In this talk, we will go through the evolution of diarization systems: we will first describe the early approaches, which considered a cascade of subtasks (e.g. VAD, finding homogeneous speaker regions, and clustering). We will then focus on the neural network-based state-of-the-art methods, such as end-to-end diarization and target-speaker VAD systems, as well as the main current challenges.
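The cascaded pipeline described in the abstract can be sketched in a few lines of toy code: per-frame speaker embeddings (here hand-made 2-D vectors, standing in for the x-vectors a real system would extract) are greedily clustered by cosine similarity, and consecutive frames with the same label are merged into speaker turns ("who spoke when"). The threshold, frame length, and vectors are all illustrative assumptions, not part of any actual system.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cluster_frames(embeddings, threshold=0.9):
    """Greedy online clustering: assign each frame to the most similar existing
    cluster, or open a new speaker cluster if nothing is similar enough."""
    centroids, labels = [], []
    for e in embeddings:
        sims = [cosine(e, c) for c in centroids]
        if sims and max(sims) >= threshold:
            labels.append(sims.index(max(sims)))
        else:
            centroids.append(list(e))
            labels.append(len(centroids) - 1)
    return labels

def to_turns(labels, frame_len=1.0):
    """Merge runs of identical labels into (start, end, speaker) turns."""
    turns, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            turns.append((start * frame_len, i * frame_len, labels[start]))
            start = i
    return turns

# Two "speakers" with distinct embedding directions; speaker 0 returns at the end.
frames = [(1.0, 0.1), (0.9, 0.2), (0.1, 1.0), (0.2, 0.9), (1.0, 0.0)]
print(to_turns(cluster_frames(frames)))
# -> [(0.0, 2.0, 0), (2.0, 4.0, 1), (4.0, 5.0, 0)]
```

Real diarization systems replace each of these toy steps with a learned component (a neural VAD, x-vector extraction, and spectral or VB-HMM clustering), but the cascade structure is the same.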
Dr. Mireia Diez Sánchez is a researcher at the Speech@FIT group at Brno University of Technology. Mireia received her Electronic Engineering degree in 2009, and her Ph.D. in 2015, both from the University of the Basque Country, Spain. Her thesis focused on the study of features for Language and Speaker recognition. In 2016 she obtained an individual Marie Curie fellowship for the SpeakerDICE project dealing with diarization tasks. She has attended several international workshops dedicated to the field of speaker recognition and diarization: Bosaris (Brno, 2012), ASRWIS (South Africa, 2016), and SCALE (Baltimore, 2017). Recently, she has successfully coordinated the BUT team for the DIHARD challenges. Her research interests are mainly speaker diarization, speaker and language recognition, and Bayesian inference.
28. 6. 2022 17:00 Vladimír Malenovský: Spatial Voice Coding (Speaker representation in 3D audio space)
We've learned how to use mobile telephones for a myriad of tasks in our everyday life. However, the most basic task is still speech communication. Although modern smartphones already contain multiple microphones, most speech codecs used today support only the encoding of monophonic signals. This limits the quality of experience, for example in teleconferencing scenarios. With an array of microphones, it should be possible to extract information about the direction of arrival (DoA) of the dominant speaker in the room. With the proper technology, a listener at the remote end could then perceive the voice as coming from the same direction as in the original scene. In this talk we will discuss the possibilities of spatial voice representation. We will learn about the ambisonics format as a universal surround-sound format. Finally, we will see how the spatial sound can be decoded and rendered into both loudspeaker and binaural representation.
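As a taste of the ambisonics format mentioned above, here is a minimal sketch of first-order ambisonic (B-format) encoding: a mono sample at a given azimuth and elevation is panned into the four channels W, X, Y, Z. The weights below follow the traditional (FuMa) convention with W scaled by 1/√2; note that modern pipelines often use the ACN/SN3D convention instead, so the exact scaling is an assumption of this sketch.

```python
import math

def encode_foa(sample, az, el):
    """Encode one mono sample into first-order ambisonics (FuMa weights).
    az = azimuth, el = elevation, both in radians."""
    w = sample / math.sqrt(2.0)               # omnidirectional component
    x = sample * math.cos(az) * math.cos(el)  # front-back figure-of-eight
    y = sample * math.sin(az) * math.cos(el)  # left-right figure-of-eight
    z = sample * math.sin(el)                 # up-down figure-of-eight
    return (w, x, y, z)

# A source directly to the left (azimuth 90 deg, elevation 0) excites only W and Y.
w, x, y, z = encode_foa(1.0, math.radians(90), 0.0)
print(round(w, 3), round(x, 3), round(y, 3), round(z, 3))  # 0.707 0.0 1.0 0.0
```

Decoding then amounts to forming a weighted sum of these channels per loudspeaker (or per ear, after convolution with HRTFs, for binaural rendering).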
Vladimir Malenovsky is a senior researcher at VoiceAge Corporation in Montreal, Canada. He finished his master's thesis at Aalborg University (Denmark) in 2001 and received his Ph.D. degree from Brno University of Technology in 2005. He was a post-doctoral researcher at the University of Sherbrooke (Canada) in 2006-2008. His long-term research interests are in Speech and Audio Coding, Mobile Communications and Machine Learning. He is currently a research assistant and part-time lecturer at the Speech@FIT research group at Brno University of Technology. He was involved in the development of several international telecommunication standards such as ITU-T G.711.1, ITU-T G.718 or 3GPP EVS. He holds 6 US patents and has 12 publications. He is a senior member of the IEEE.
31. 5. 2022 17:00 Zuzana Kukelova: Methods for Generating Efficient Algebraic Solvers for Computer Vision Problems
Many problems in computer vision, but also in other fields such as robotics, control design, and economics, can be formulated using systems of polynomial equations. For computer vision problems, general algorithms for solving polynomial systems cannot be efficiently applied. The reasons are twofold: computer vision and robotic applications usually require real-time solutions, and they often solve systems of polynomial equations for a huge number, sometimes even millions, of different instances. Several approaches based on algebraic geometry have recently been proposed for the design of very efficient algorithms (solvers) that solve specific classes of systems of polynomial equations. In this talk, we will briefly discuss the main idea of these methods, which use the structure of the system of polynomial equations representing a particular problem to design an efficient specific solver for this problem. We will also discuss several approaches for improving the efficiency of the final solvers. Finally, we will demonstrate the usefulness of these methods by presenting efficient solutions to several important computer vision problems.
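The solvers discussed in the talk ultimately reduce a polynomial system to an eigenvalue problem of a multiplication ("action") matrix computed from a Gröbner basis. The univariate special case of this idea is the classic companion matrix, whose eigenvalues are the roots of the polynomial. A toy illustration with p(x) = x³ − 6x² + 11x − 6 = (x−1)(x−2)(x−3):

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of the monic polynomial
    x^n + coeffs[0]*x^(n-1) + ... + coeffs[n-1]."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)           # sub-diagonal of ones
    C[:, -1] = -np.array(coeffs[::-1])   # last column: negated coefficients
    return C

# p(x) = x^3 - 6x^2 + 11x - 6  ->  coefficients [-6, 11, -6]
roots = np.linalg.eigvals(companion([-6.0, 11.0, -6.0]))
print(sorted(roots.real.round(6).tolist()))  # [1.0, 2.0, 3.0]
```

For multivariate systems the same recipe applies with the action matrix of multiplication by some monomial in the quotient ring; building that matrix efficiently for a specific problem class is exactly what the automatic solver generators mentioned below do.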
Zuzana Kukelova is an assistant professor at the Czech Technical University in Prague (CTU). She received her PhD from CTU in 2013, and her Master’s in 2005 from Comenius University in Bratislava, Slovakia. She was a Postdoctoral Researcher at Microsoft Research Cambridge (2014-2016). Zuzana is an expert on solving minimal problems in 3D computer vision and methods for generating efficient solvers for systems of polynomial equations. She is the co-author of the first automatic generator of efficient polynomial equation solvers based on Gröbner bases. She has worked on absolute and relative camera pose estimation for (partially) uncalibrated and rolling shutter cameras. Her solvers are part of structure-from-motion, localization, and calibration systems. Zuzana has co-organized tutorials on minimal problems at ICCV’15 and CVPR’19, was an AC for 3DV’18, 3DV’19, ACCV’20, and CVPR’22, a program chair for 3DV’20, and is currently a general chair for 3DV’22.
26. 4. 2022 17:00 Matej Hoffmann: Biologically inspired robot body models and self-calibration
Typically, mechanical design specifications provide the basis for a robot model, and kinematic and dynamic mappings are constructed and remain fixed during operation. However, there are many sources of inaccuracies (e.g., assembly process, mechanical elasticity, friction). Furthermore, with the advent of collaborative, social, or soft robots, the stiffness of the materials and the precision of the manufactured parts drop, and CAD models provide a less accurate basis for the models. Humans, on the other hand, seamlessly control their complex bodies, adapt to growth or failures, and use tools. Exploiting multimodal sensory information plays a key part in these processes. In this talk, I will establish differences between body representations in the brain and robot body models and assess the possibilities for learning robot models in biologically inspired ways.
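The gap between a CAD-specified model and the real mechanism can be illustrated with a toy self-calibration: a 1-DoF planar arm whose nominal link length is wrong, recovered by least squares from noisy end-effector observations p = L·(cos q, sin q). All numbers here are synthetic assumptions; real calibration estimates many kinematic parameters jointly, often from self-touch or visual observation as discussed in the talk.

```python
import math, random

random.seed(0)
L_true, L_nominal = 0.52, 0.50   # true link length vs. the CAD-specified value

# Simulate noisy end-effector measurements at several joint angles.
angles = [i * 0.3 for i in range(20)]
meas = [(L_true * math.cos(q) + random.gauss(0, 1e-3),
         L_true * math.sin(q) + random.gauss(0, 1e-3)) for q in angles]

# Least squares: with p = L*u(q) and ||u(q)|| = 1, minimizing
# sum ||p_i - L*u_i||^2 gives L = mean of the projections p_i . u_i.
L_est = sum(px * math.cos(q) + py * math.sin(q)
            for (px, py), q in zip(meas, angles)) / len(angles)
print(round(L_est, 3))  # close to 0.52, despite the nominal 0.50
```

The biologically inspired twist the talk explores is where such correction signals come from: instead of an external measurement rig, the robot exploits its own multimodal sensing (touch, vision, proprioception), much as humans do.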
Matej Hoffmann completed his PhD and then served as a Senior Research Associate at the Artificial Intelligence Laboratory, University of Zurich, Switzerland (Prof. Rolf Pfeifer, 2006–2013). In 2013 he joined the iCub Facility of the Italian Institute of Technology (Prof. Giorgio Metta), supported by a Marie Curie Intra-European Fellowship. In 2017, he joined the Department of Cybernetics, FEE, CTU, where he is currently serving as an Assistant Professor. His research interests are in humanoid, cognitive developmental, and collaborative robotics.
22. 3. 2022 17:00 Ondřej Dušek: Large neural language models for data-to-text generation
The current research state of the art in automatic data-to-text generation, a major task in natural language generation, is dominated by large language models based on the Transformer neural network architecture. These models are capable of producing lifelike, natural texts; however, they are hard to control and often do not adhere to the input data, omitting important content or producing "hallucinated" text which is not grounded in the input data. In this talk, I will first show the basic operating principles of large language models. I will then detail our experiments aiming at higher accuracy of generated text in two ways: (1) improving the accuracy of the generating language models themselves, and (2) automatically detecting errors in generated texts.
Ondřej Dušek is an assistant professor at the Institute of Formal and Applied Linguistics, Faculty of Mathematics and Physics, Charles University. His research is in the areas of natural language generation and dialogue systems; he specifically focuses on neural-network-based approaches to these problems and their evaluation. Ondřej got his PhD in 2017 at Charles University. Between 2016 and 2018, he worked at the Interaction Lab at Heriot-Watt University in Edinburgh, one of the leading groups in natural-language interaction with computers. There he co-organized the E2E NLG text generation challenge, and since then he has been involved in multiple international efforts around the evaluation of generated text. He recently obtained an ERC Starting Grant on developing new, fluent and accurate methods for language generation. The project will start in the coming months.
22. 2. 2022 Petr Schwarz: Voice biometry – current research and industrial efforts
Voice biometry is a technology that outperforms humans. We will present how modern voice biometry systems are built and how they are deployed. The key issues are how to collect data, which input features describe the human vocal tract, which machine learning techniques are used for modeling, how to train the models, and how to deliver the model to its user while keeping the best accuracy. We will also mention the current research challenges, most of which are common to other machine learning areas – new neural architectures and training paradigms, robustness, unsupervised training, and probabilistic representations of speaker embeddings.
Petr Schwarz [PhD, Brno University of Technology, 2009] is a senior researcher in BUT Speech@FIT at the Faculty of Information Technology (FIT) of BUT. He has broad experience in speech technologies, ranging from voice biometry, speech transcription and keyword spotting to language identification. At BUT, Petr worked on many national, EU, and US research projects and many international technology evaluation campaigns like those organized by the U.S. National Institute of Standards and Technology (NIST). In 2006, Petr co-founded Phonexia and served for several years as its CEO and CTO. Phonexia sells speech technologies to more than 60 countries. Currently, he is working on conversational AI technologies and security/defense applications of voice biometry.
18. 1. 2022 Torsten Sattler: Recent Developments in Visual Localization
Visual Localization is the problem of estimating from which position and orientation a given image was taken. Solving the localization problem is an important part of many advanced AI applications, including self-driving cars and other autonomous robots as well as augmented and virtual reality systems. In this talk, we will give an introduction to the problem and current state-of-the-art solutions, as well as advanced applications such as performance capture of human motion in large-scale scenes.
Torsten Sattler is a senior researcher at the Czech Institute of Informatics, Robotics and Cybernetics (CIIRC) at the Czech Technical University in Prague (CTU), where he is heading the Spatial Intelligence group. His work is at the intersection of 3D computer vision and machine learning, with the goal of making 3D computer vision algorithms such as 3D reconstruction and visual localization more robust and reliable through scene understanding, while using 3D computer vision methods to train machine learning models.
Before joining CIIRC in July 2020, Torsten was an associate professor (tenured) in the Department of Electrical Engineering at Chalmers University of Technology. He joined Chalmers in January 2019 after 5 years, first as a PostDoc and then as a senior researcher, in Marc Pollefeys' Computer Vision and Geometry Group in the Department of Computer Science at ETH Zurich. Between July 2016 and June 2018, during Marc's sabbatical, Torsten was his deputy, tasked with the day-to-day operation of the group. Torsten obtained his PhD from RWTH Aachen University under the supervision of Leif Kobbelt and Bastian Leibe.
Torsten is an ELLIS scholar and will be a program chair for ECCV 2024. He is also a co-organizer of the 3DGV Seminar series.