Aim and Scope
May 7, 2021 (updated May 12, 2021)
What will we be able to do with computers that learn, or computers that work at the very basic quantum foundation of our world? How will these emerging technologies impact our lives in the global village?
The exponential increase in computing power we have witnessed during the last decades has been accompanied by an even more explosive increase in the data available, to the point that making sense of this “ocean of data” could be the major driver of innovation in the years to come¹. Artificial Intelligence (AI) is emerging as one of the most promising avenues to analyse and extract useful information from large amounts of data.
AI is not a new concept, and it is already present in many aspects of our lives, from assisting radiologists in recognising cancer in medical images to helping particle physicists analyse complex particle collisions.
However, the recent developments in Deep Learning, coupled with High Performance Computing (HPC), open completely new and futuristic perspectives. Will robot-scientists be able to master the entire chain from generating scientific hypotheses by mining academic literature, to performing the appropriate simulations and experiments, all the way up to publishing scientific results? Initial steps in this direction have already been taken by King et al. in genomics and tropical disease research. Can it go even further, filing patents on the path to commercialisation?
The industry has long harnessed AI for applications including robotised production lines, mobile device apps, autonomous vehicles, and many more. Will the AI of the future go even further and anticipate customer needs, design new products, and market them?
It is important to distinguish between weak AI and strong AI. Weak AI reproduces what it has learnt within a relatively restricted scope; it can perform well within that scope, but remains easily controllable by the humans who program its scope and set the boundaries of its action. Strong AI, on the other hand, acts more like a brain, constantly adapting its scope to the external stimuli it encounters. It can make connections between apparently unrelated concepts and go beyond human cognitive biases. Strong AI can become much more powerful at problem solving, but could potentially escape human control and become hostile to humans.
Some specialists argue that resolving societal challenges such as defeating illness or even death, or solving climate and environmental issues, will require the power of strong AI. This will call for risk assessment and scenario planning to keep humans in control while leveraging the new opportunities.
These questions raise ethical and societal concerns, which need to be addressed in order to ensure transparency, accountability, governance by humans, security, and safety, as well as the right balance between privacy and transparency (‘as open as possible, as closed as necessary’). The overall context of digitalisation, open science, and open innovation requires new thinking in terms of policies. This accelerating IT revolution stands increasingly at the crossroads of science, technology, society, philosophy, and politics.
In order to harness these new technologies successfully, it is necessary to create spaces for multidisciplinary exchange between basic research, advanced IT providers, potential users in different fields, and policy makers.
It is in this context that we intend to organise the second international symposium, which will address the following topics:
- From pattern recognition to experience-driven decision making;
- Natural language processing, simultaneous translation;
- The teaching of ML and quantum computing at different levels;
- The evolution of computing resource requirements in different domains;
- The future of large data analytics;
- Collaborative systems and human-machine interactions;
- Algorithmic game theory and computational social choice;
- Quantum and classical algorithms for deep learning and AI;
- The perspectives beyond the current horizon;
Ethical and societal issues which could be addressed include:
- Transparency: to what extent is it possible to explain and verify the algorithms?
- Choice assistance: can AI provide appropriate support for user choice?
- Governance and control: can AI-powered systems still be controlled by humans?
- Security, robustness and dependability of AI networks;
- Safety: how to make sure that AI systems do not cause harm to humans or property;
- Privacy: how to ensure data protection in the AI world;
- Ethics and accountability: respect of human value and attribution of responsibility;
- Policy-regulated access to data: what data is accessible to machines, and under what licensing arrangements? (Data is a vital input to the process of machine learning.)
- New policies and regulatory tools to be developed in order to address new needs related to digitalisation, open science, and open innovation.
¹ And it already is; see http://www.oecd.org/sti/ieconomy/data-driven-innovation.htm