New gatekeepers have emerged to fill the vacuum created by a media increasingly needing ‘likes’ to feed a codependent life-support system based on advertising revenue. Such codependency is maladaptive, and objectivity is a price the media seem prepared to pay. The Bellingcat group is a good place to look for accurate reporting. But how did we get to needing citizen journalists for our truths?
Some might say we got there in part because of weak leadership. That authoritarians are strongmen, not strong leaders. That demagogues manage for self-promotion and office retention rather than national interest and improvement.
Is it too late to consider new paradigms of leadership? Should we be able to sanction elected people for dereliction of duty? Could a global team of elders assist in building a road-map towards equality?
Why are we not cooperating at the international level? Warnings of a pandemic were ignored for decades; nationalism was preferred. The climate change problem continues to be ignored in favour of nationalist agendas. What will we do when robots become autonomous?
Isaac Asimov imagined a need for Three Laws as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
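Read as an engineering spec, the laws form a strict priority ordering: the First Law overrides the Second, and the Second overrides the Third. The toy Python sketch below only illustrates that ordering, not a real safety system; the predicates `harms_human`, `disobeys_order` and `endangers_self` are hypothetical stand-ins for judgments no current robot can actually make.

```python
# Toy illustration of the Three Laws as a priority-ordered filter on actions.
# The three predicates are hypothetical: deciding whether an action "harms a
# human" (including harm through inaction) is exactly the hard, unsolved part.

def choose_action(candidate_actions, harms_human, disobeys_order, endangers_self):
    """Pick an action by applying the Three Laws in strict priority order."""
    # First Law: discard any action predicted to injure a human,
    # or to allow a human to come to harm through inaction.
    safe = [a for a in candidate_actions if not harms_human(a)]

    # Second Law: among safe actions, prefer those that obey human orders.
    obedient = [a for a in safe if not disobeys_order(a)] or safe

    # Third Law: among the remainder, prefer actions that preserve the robot.
    self_preserving = [a for a in obedient if not endangers_self(a)] or obedient

    return self_preserving[0] if self_preserving else None
```

The fallbacks (`or safe`, `or obedient`) are what encode the hierarchy: a lower law is honoured only when it does not conflict with the laws above it.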
My point is that Asimov was imagining ahead when he wrote these laws in 1942. Thinking ahead is a responsibility of global leaders, yet most leaders are executives rather than strategists.
Empowerment is a newer concept in robotics. Empower the robots to make choices, but require that they maintain or improve human empowerment while selecting from the options detected by their algorithmic analyses. This essentially means being protective and supportive. We should immunise ourselves against the robots by making the harming of people by robots (at least) a crime against humanity.
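As a rough sketch of what “maintain or improve human empowerment” could mean computationally: score each candidate action by how many options it leaves the humans afterwards, discard actions that shrink that set, and only then optimise for the robot’s own task. This is a simplification of the empowerment idea in the robotics literature, where empowerment is an information-theoretic quantity; `simulate`, `estimate_human_options` and `task_value` below are hypothetical placeholders.

```python
# Toy sketch: choose an action that does not reduce human empowerment,
# approximated here as the number of options left open to the humans.
# simulate(), estimate_human_options() and task_value() are hypothetical.

def choose_empowering_action(candidate_actions, current_state,
                             simulate, estimate_human_options, task_value):
    """Select the best task action among those that keep humans empowered."""
    baseline = estimate_human_options(current_state)

    # Keep only actions whose predicted outcome leaves humans with at least
    # as many options as they have now (maintain or improve empowerment).
    empowering = [
        a for a in candidate_actions
        if estimate_human_options(simulate(current_state, a)) >= baseline
    ]
    if not empowering:
        return None  # refuse to act rather than disempower the humans

    # Among the empowering actions, pick the one most useful for the task.
    return max(empowering, key=task_value)
```

In the research literature empowerment is usually formalised as a measure of how much influence an agent has over its own future states; the crude option-counting proxy above only captures the intuition that a protective robot is one that never closes doors for people.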
Grievability, the association of sorrow or grief with damage to a person or object, could be the key. Linked with constructivism, the idea that the behaviour and relations of states are products of human invention, responsibility and accountability might be more easily defined and measured.
I guess what I mean is that any nation or robot that causes harm is, by the very act, doing so wilfully. The legal trick would be to avoid inequality in grievability: do we grieve the loss of a king more than a pauper? The concept of equality in human rights suggests not. Some countries prevent their leaders from killing the leaders of other countries, yet allow them to prosecute, with impunity, a war that kills tens of thousands of civilians. Yet civilians would be as grievable as any leader to a robot.
Peter says
I like the idea of a “global team of elders”, but how would we ensure that some of the very ineffective political leaders in the world today don’t become part of this team? Also, with Asimov’s first law, do “injure” and “harm” include damage that can be caused with words? We already have “bots” unleashed on the internet that are causing injury and harm to our societies. It would be good if humans abided by Asimov’s first law as well.
Simon Robinson says
My closing line was intended to highlight the ironic implication that we’d really like our leaders to be as protective of humanity as the robots we will build. However we do it, we must build the robots to be more protective than the leaders, or else the leaders will use them as weapons of war.
I recall reading, many years ago, a military analysis of the efficacy of lasers as a force multiplier in war. Blinding enemy soldiers, or whole cities of civilians, would choke the supply lines with more casualties than gas did in the trenches of WW1.
Lasers may yet do exactly that. And robotics is already at war, starting with killer drones.
Nelson Mandela thought a panel of elders might be useful and founded The Elders in 2007.
https://www.theelders.org/