Artificial intelligence is revolutionizing every aspect of our lives and work, in technological development as much as in the business world. It is a major trend that will reshape both in the near future.
Data analytics allows AI to connect seemingly unrelated dots and to automate repetitive tasks, making operations and services more efficient.
Interview with Christophe Bortolaso, Head of Research at Berger-Levrault, about the work in the field of artificial intelligence carried out by the Research and Technological Innovation Department.
For the past ten years, Berger-Levrault has invested in prospective work covering several technological and societal issues of the 21st century. This has resulted in a dozen research and innovation programs and nearly a hundred parallel projects. The areas of innovation of the Research and Technological Innovation Department can be divided into two categories:
On the one hand, it is necessary to develop new methods and tools that make software development easier and faster. This goes hand in hand with a broader transition in which software moves from being a product to being a service that can be consumed, composed and adjusted on demand. The emergence of the cloud and SaaS solutions is the beginning of this shift, and it is the direction we are pursuing. All these developments open the way to hyper-customization of our solutions. Obviously, a lot of scientific and technical progress is still needed to fully achieve this goal, but we are clearly moving in this direction.
On the other hand, we are interested in what technology can bring in terms of new uses and possibilities: Artificial Intelligence, Robotic Process Automation (RPA), bio-inspired algorithms, data visualization and digital twins will be at the heart of tomorrow's developments. It is very clear that recent technological advances will radically transform a large number of businesses. These technologies will enable us to process information with greater flexibility, working on semantics and content without imposing any particular structure, format or standard. They represent a real step forward that should eventually free users from many of the technical and repetitive tasks that digitalization has imposed on them.
The trends are towards hyper-automation, hyper-customization and hyper-interoperability.
In the short term, a very large number of services will be assisted by automation. I insist on the term “assisted” because the human must always remain in the loop. The archetypal example is the reception desk, which can be relieved by a chatbot (a conversational agent) that handles the most common and simple requests.
But the biggest changes will take place in the back office. It is in administrative tasks, less visible to the general public, that hyper-automation will considerably change daily work in the medium term. This means automating processes end to end and relieving staff of very repetitive and tedious tasks.
At Berger-Levrault, we have conducted several experimental projects in this area. For example, while supporting the Spanish municipality of Cornellá de Llobregat, we developed a particularly innovative solution that uses AI and semantic analysis to identify duplicate residents in the municipal register managed by the WinGT software distributed in Spain. We built an AI capable of constructing a semantic fingerprint of a database record in order to compare it with others. This is a typical case of a manual, tedious activity that can be automated thanks to AI: nobody likes to check the consistency of several hundred thousand records by hand, but intelligent algorithms can.
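To make the idea concrete, here is a minimal sketch of how such a semantic fingerprint could be built and compared, using character n-gram TF-IDF vectors and cosine similarity. The field names, the similarity threshold and the scikit-learn approach are illustrative assumptions, not the actual WinGT implementation.

```python
# Minimal sketch of duplicate detection via "semantic fingerprints".
# Assumptions: the record fields, the 0.85 threshold and the TF-IDF
# approach are illustrative only, not the actual implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

records = [
    {"name": "María García López", "address": "C/ Mayor 12", "birth": "1980-04-02"},
    {"name": "Maria Garcia Lopez", "address": "Calle Mayor, 12", "birth": "1980-04-02"},
    {"name": "Joan Martí Serra", "address": "Av. Diagonal 5", "birth": "1975-11-30"},
]

# Build one text "fingerprint" per record by concatenating its fields.
texts = [" ".join(r.values()) for r in records]

# Character n-grams are robust to accents, abbreviations and typos.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
fingerprints = vectorizer.fit_transform(texts)

# Compare every pair and flag likely duplicates above a similarity threshold.
similarities = cosine_similarity(fingerprints)
THRESHOLD = 0.85
for i in range(len(records)):
    for j in range(i + 1, len(records)):
        if similarities[i, j] >= THRESHOLD:
            print(f"Possible duplicate: record {i} ~ record {j} "
                  f"(similarity {similarities[i, j]:.2f})")
```

In practice, flagged pairs would be presented to an agent for confirmation rather than merged automatically, keeping the human in the loop.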
In the same way, thanks to image analysis and deep learning, it is now possible to extract information from standard documents such as invoices or PDF estimates and automatically turn them into a mandate or purchase order. Going even further, predictive analysis will allow us to anticipate how local authorities consume their budgets and to provide extremely effective management tools, so that decision-makers can make the best choices in complete transparency.
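As a deliberately simplified illustration of that first step, the sketch below pulls a few fields out of an invoice PDF using plain text extraction and pattern matching. A production system of the kind described would rely on trained deep-learning models; the pdfplumber library, the regular expressions and the file name are assumptions made for the example.

```python
# Simplified sketch of extracting key fields from an invoice PDF.
# Assumptions: pdfplumber and the patterns below are illustrative; the
# deep-learning pipeline described in the interview is not shown here.
import re
import pdfplumber

def extract_invoice_fields(path: str) -> dict:
    """Return a few key fields found in the text layer of a PDF invoice."""
    with pdfplumber.open(path) as pdf:
        text = "\n".join(page.extract_text() or "" for page in pdf.pages)

    # Hypothetical patterns; real invoices vary widely in layout and wording.
    invoice_no = re.search(r"Invoice\s*(?:No\.?|#)\s*([A-Z0-9-]+)", text, re.I)
    total = re.search(r"Total\s*(?:due)?\s*[:€$]?\s*([\d.,]+)", text, re.I)
    date = re.search(r"Date\s*:?\s*(\d{2}/\d{2}/\d{4})", text, re.I)

    return {
        "invoice_number": invoice_no.group(1) if invoice_no else None,
        "total": total.group(1) if total else None,
        "date": date.group(1) if date else None,
    }

# Example usage with a hypothetical file; the extracted fields could then
# feed the creation of a purchase order or payment mandate.
print(extract_invoice_fields("supplier_invoice.pdf"))
```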
The human element will always be present in the process, but in a management and supervisory role. AI is not our only vector of innovation, however. We are also investigating analytical tools for decision-makers, and that is where data visualization technologies come in. For example, in our citizen relationship management tools, we are trying to build visualizations that highlight how citizen requests flow through the community's services. This is a very interesting case in which we try to show the performance of the workflows for processing different cases: their processing times, bottlenecks, variations over the year, and so on. In this particular case there is no need for AI, but rather for human intelligence. Here, the technology must graphically show decision-makers what is happening in the field so they can organize their services in the best possible way. Far beyond a simple dashboard, we seek to highlight, shape and color the data so that it is as understandable as possible to humans.
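A minimal sketch of this kind of analysis might look like the following: compute the processing time of each citizen request and chart the median per request type so that bottlenecks stand out. The column names and sample data are assumptions for illustration, not the actual citizen relationship management schema.

```python
# Minimal sketch of visualizing citizen-request processing times to spot
# bottlenecks. Column names and sample data are illustrative assumptions.
import pandas as pd
import matplotlib.pyplot as plt

requests = pd.DataFrame({
    "request_type": ["housing", "housing", "civil status", "roads", "roads", "roads"],
    "opened": pd.to_datetime(["2023-01-02", "2023-01-10", "2023-01-05",
                              "2023-01-03", "2023-01-15", "2023-02-01"]),
    "closed": pd.to_datetime(["2023-02-20", "2023-03-01", "2023-01-12",
                              "2023-01-20", "2023-02-10", "2023-02-12"]),
})

# Processing time in days for each request.
requests["days_to_close"] = (requests["closed"] - requests["opened"]).dt.days

# Median processing time per request type: long bars point to bottlenecks.
summary = requests.groupby("request_type")["days_to_close"].median().sort_values()
summary.plot(kind="barh", title="Median processing time by request type (days)")
plt.tight_layout()
plt.show()
```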
There are several levels of maturity and complexity in AI technologies.
There are uses of AI that have the potential to arrive very quickly in digital solutions. For example, the semantic detection of duplicate citizens in a database is a very applied, concrete use that already delivers very good results. In the same way, intelligent analysis of paper documents, tickets or ID cards could be applied to our products very quickly. There are many examples of this kind of short-term application.
On the other hand, other applications are much more complex and will require several more years of research and development. Typically, what we are trying to do in terms of predictive budget analysis for local authorities is particularly delicate: it is a complex mathematical and statistical problem that will require much more work. In the same way, we are interested in anticipating population flows in order to predict the need for space in schools. Here again, the subject will require a lot of research before it becomes a reality in our products.
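As a very rough illustration of the kind of question involved (and nothing like the research-grade models the problem actually calls for), the sketch below fits a simple linear trend to monthly budget consumption and projects the year-end total. The figures and the choice of a linear model are assumptions made purely for the example.

```python
# Toy sketch: projecting year-end budget consumption from monthly figures
# with a simple linear trend. The numbers are invented, and a linear model
# is far too naive for the research problem described in the interview.
import numpy as np

# Cumulative budget consumed at the end of each of the first six months (k€).
months = np.arange(1, 7)
consumed = np.array([80, 170, 240, 330, 410, 500], dtype=float)

# Least-squares fit of consumption as a linear function of the month index.
slope, intercept = np.polyfit(months, consumed, deg=1)

# Project consumption at the end of month 12.
year_end = slope * 12 + intercept
print(f"Projected year-end consumption: {year_end:.0f} k€")
```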
The objective of our work in AI is always the same: to make agents' work easier by automating very repetitive or tedious data-processing tasks. Once that is in place, the advantages are obvious: gains in time and efficiency.
Recent years have seen a profusion of discussions at national and European level, but still no clear and strict framework for AI. In April 2021, the European Commission proposed the first-ever legal framework on AI, which is based on assessing the level of risk potentially generated by these algorithms. In the first examples put forward by the Commission, the risk is determined mainly by use case. It goes without saying that the risks generated by an AI-driven surgical robot are far more sensitive than those of a chatbot that greets citizens on a portal.
Overall, these issues are not unique to AI; they apply to any type of computer processing, whether it relies on so-called AI technologies or on more traditional development methods. This is a long-standing problem that has always led the authorities to question automation in critical systems, whether in transportation, the military, health or social services.
We will therefore see to what extent this regulation will be implemented country by country and we will of course follow it very closely. But it is important to remember that we (Berger-Levrault) did not wait for the European Union to explore these issues and the associated risks. This is evidenced by our long-standing work on trust and algorithmic ethics, which I mentioned earlier and which began in 2013.
There are still many challenges to overcome, but complex algorithmic processing such as AI systematically raises two major questions: Trust and Ethics.
On the one hand, we need to ensure that the algorithms we develop are reliable. Because even if the difference is transparent to the user, there is a huge gap between a button that performs a simple action, like saving a file, and a button that triggers AI-based processing. The former is predictable: it always performs the same action and always produces the same result. The latter performs an action that depends on the context and on the data the AI has been fed, so the result depends very strongly on the client's context.
It is therefore necessary to work very early on ensuring that the processing carried out by these AIs can be explained, so that the user remains in full control. This is a major issue of trust in digital technology. To anticipate these issues, the solution is to put the user at the center of all our considerations. The most powerful AI in the world must remain understandable, perceptible and controllable by a human.
On the other hand, some AI applications raise ethical issues that must not be neglected. If AI is to support human decision-making, particular attention must be paid to the way these learning mechanisms are built and, more specifically, to the data used to feed them. If AI learns by example, then it must be provided with examples that accurately represent the diversity and complexity of our world. This is a vast challenge that sits at the heart of our research questions.
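Returning to the point about explainability and keeping the user in control, here is a minimal sketch of what an explainable decision could look like in the duplicate-detection scenario mentioned earlier: instead of only flagging two records as probable duplicates, the system shows how much each field contributed to the score and leaves the final decision to a human. The field names, the weights and the use of Python's difflib are assumptions made for illustration.

```python
# Minimal sketch of making an automated decision explainable: when two
# records are flagged as possible duplicates, show the user how much each
# field contributed to the overall similarity. Field names, weights and the
# use of difflib are illustrative assumptions only.
from difflib import SequenceMatcher

WEIGHTS = {"name": 0.5, "address": 0.3, "birth": 0.2}  # hypothetical weights

def explain_match(record_a: dict, record_b: dict) -> dict:
    """Return per-field similarities and a weighted overall score."""
    per_field = {
        field: SequenceMatcher(None, record_a[field].lower(),
                               record_b[field].lower()).ratio()
        for field in WEIGHTS
    }
    overall = sum(WEIGHTS[f] * s for f, s in per_field.items())
    return {"per_field": per_field, "overall": overall}

a = {"name": "Maria Garcia Lopez", "address": "Calle Mayor 12", "birth": "1980-04-02"}
b = {"name": "María García López", "address": "C/ Mayor 12", "birth": "1980-04-02"}

report = explain_match(a, b)
for field, score in report["per_field"].items():
    print(f"{field:8s} similarity: {score:.2f}")
print(f"overall  score: {report['overall']:.2f} -> left to a human to confirm")
```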