Will AI and Robotics Author the Future?


A Guiding Philosophy

A new paradigm is dawning: a future authored not by humans, but by the momentum and possibilities of AI, Robotics, Biometrics, Biotechnology, and many other emerging technologies. This new horizon is not necessarily human-friendly.

To deal with this, we need an outlook that is reasonable and philosophical, one that places empathy with human need, and with the human condition, at its centre at all times and on all occasions; a perspective founded in human values and human agency. We need an outlook launched from the vantage of all of humanity; a perspective that is humanistic, and that looks to a human-centred future.

Technology and False Narratives

A growing number of articles acknowledge the significant, global-scale disruptive potential of AI, Machine Learning, Robotics, and related technologies. The impacts are wide-ranging: global employment; privacy; economic disruption, including adverse impacts on taxation and the potential collapse of economic sectors due to 3-D and 4-D printing; global security, as a consequence of the militarisation of AI and Robotics; the psychological and social development of children; and growing inequality and wealth concentration.

Biometrics and other Biotechnologies are raising moral uncertainty by creating new scenarios whose ethical, legal, and social implications have not yet been fully explored.

Machine Learning and recent audio/visual manipulation technologies can already create fake fingerprints and deepfake video.

AI, Robotics, Robotic Process Automation, 3-D and 4-D printing, and other recent technologies will radically change the means of production, with global consequences. The means of production form the foundation of social, economic, and political institutions and structures. Global change to the means of production will therefore impact the institutions and structures of society and of economies.

These impacts will exacerbate existing social concerns: wealth inequality; the concentration and abuse of power; and global employment instability.

Evangelical articles about AI and associated technologies project a vision of the future that is a false narrative. We should be wary of science and technology attitudes and forecasts that convey an over-optimistic promotion of human rationality. Such excessive optimism is developing into an ideological zeal that has fuelled technochauvinism and its drive to author the future direction of society.

Technochauvinism and its narrative write hubris and oversimplification into their vision of the future. The science and technology fiction of the future is antihistorical, and ignores the possibility of larger evolutionary forces that are unfolding. The technology view of the future has no meaningful grasp of the fundamentally disruptive nature of some of its own developments and forecasts. Although it has recently acknowledged the possibility of adverse impacts, it attempts to discount those impacts by offering overwhelming benefits as compensation; that is, it has authored its own cost/benefit ledger and unilaterally declared that the benefits outweigh the costs. Meanwhile, concerns over the growing inventory of significant impacts continue to mount.

These calls to embrace the benefits of AI, Machine Learning, Robotics, Biotechnology, and the rest are being published at the same time that Biometric data collection, Big Data, Machine Learning, and other technologies are being deployed against the entire populations of China and India. That is, these articles build a false narrative of the future by ignoring the almost inevitable adverse realities of the development and deployment of these technologies, and by ignoring their underlying structural causes.

These narratives tell us that society must accept the risk transfer associated with the advance of AI, Machine Learning, Robotics, Biotechnology, and the rest; all society can do is change, and be disrupted.

This type of narrative disempowers us. We are no longer the authors of our future, but the collateral damage of a technology-driven future.

If we imagine a technology-driven society in which the development of social interaction, social consciousness, empathy, and emotional intelligence is curtailed (observations already frequently reported, and linked to Digital technology, including Social Media), then we should expect that technology will exhibit the biases of its cultural situation, and the affective limitations of technologists and of algorithmic, data-driven approaches.

Do we want to give the manufacturers of Killer Robots the decision rights over when to use them? Does society need Killer Robots?

Does society benefit by deferring an ethical decision to some technologist, to be coded into a Machine Learning algorithm?

Again, many emerging technologies, such as Robotics and 3-D Printing, can have significant adverse impacts on social and economic structures.

Recent Biotechnology developments are creating situations with significant legal and ethical consequences, which will take some time to work through.

Imperial College London have identified over 100 disruptive technologies. 

The greater human good was not the driving force behind Robots, Biometric Recognition, Robotic Process Automation, and 3-D printing. The abuse of AI, Big Data, and Biometric Recognition technologies in India, China, and the West is only possible through the active participation of individual technologists, platforms, and technology companies.

Claims of benefit to society from science and technology often prove, in time, to be only relative benefits; they are not absolute advances, but frequently involve significant trade-offs and side-effects. Where a benefit does appear to be a genuine advance, it may be due to a fortuitous convergence of interests.

The Need for Proactive Governance

It is clear that we need effective, global governance of technology development and deployment.

While technology needs to be part of the governance process, the governance of technology must be global and independent.

Reactive governance, which attempts to deal with each technology or situation on a case-by-case basis, will likely be overwhelmed, or forced into compromise deals; it will not effectively resolve the risk transference that is typically involved in the development and deployment of these technologies at scale.

In response, Professor Dagmar Monett (Professor of Computer Science, Berlin) replied: “…we might need pro-active governance in lots of situations. We are reacting in a post mortem way, after it might be even impossible in many cases to roll back what has been done or developed.”

Global, proactive governance is the only effective way forward. But more than that, society needs to take control of its future. What we have now is technochauvinism authoring the future of society: technology developments are pursued and deployed as a fait accompli, and society is given no say in the level of risk transference it is prepared to accept for the adverse impacts that eventually arise.