What Bridges Teach Us about Ethics of AI
--
How the history of bridges informs us about the significance of ethical AI.
Today’s AI reminds me of the early days of bridge building. The very first bridges were arched, made of masonry, stone, or wood, as those were the most plentiful materials at the time. Builders tried different designs and learned from trial and error that arch bridges worked well, partly because of the insight into how their shape distributed the load to the abutments. The actual theory behind such bridges was not known for a long time, but architects and engineers kept making incremental improvements to arched bridges. Once the underlying science advanced and stronger, more cost-efficient materials became available, bridge construction was transformed.
So the first lesson is about engineering solutions even when their full workings are unknown — this has both benefits and problems. Many early bridges failed because of flooding or heavy loads. Some were over-engineered to compensate for the lack of understanding, and early bridge structures were not very efficient. Still, many of these bridges worked, and a few masterpieces are still standing.
However, the more interesting lesson comes from the wider implications of early bridge construction. Consider, for example, the history of Old London Bridge, as narrated in this video. Though conceived with good intentions, the bridge had many problems: it was crowded, it was overloaded because too many shops were built on it, it discharged pollution into the river, and it suffered fires. These challenges eventually forced a redesign of the bridge, which took a long time and consumed considerable resources.
Today’s AI is not very different. We are rushing to deploy AI without a solid understanding of how these systems work and how their deployment causes problems. Many of today’s AI systems are engineered solutions built on data derived from systems that were never intended for AI, on dubious science, and on flawed algorithms (for instance, those claiming to predict human emotions). No wonder we are seeing more and more stories about personal data being used without permission and about biased AI and ML. The reality is probably far worse, as most AI solutions are shrouded in secrecy and inaccessible to public scrutiny.
We should pause and learn from history. AI can be a wonderful technology if we are human-centric and thoughtful about its social and ethical aspects. Unless we correct our course, we run the risk of harming many people and losing the trust of the public.