Selected TED/TEDx talks on artificial intelligence

Artificial intelligence (AI) refers to the performance of cognitive functions by machines, through technologies such as algorithms and other machine learning mechanisms. The very definition of AI places devices possessing this "artificial" intelligence in comparison to, and opposition with, humans possessing "natural" intelligence. The discipline has existed for more than sixty years, and in recent years it has gained enough momentum that many of its once outlandish ambitions, such as self-driving cars, are now current or imminent realities. As computing power advances exponentially and the uses for and types of data keep growing, AI has become ubiquitous in news of the newest and emerging technological innovations.

As AI sustains and draws on its now considerable basis of achievements to make even more advancements in research and development across many business sectors, ethical and existential dilemmas related to it become more prevalent as well. Returning to that initial dichotomy between artificial or machine intelligence and natural or human intelligence, the design ethics and morality of bestowing human-like thinking ability on devices and networks raise many philosophical questions. Certain uses of AI, such as for autonomous weapons, could even pose safety risks to humans if not developed and directed thoughtfully.

These questions can go on and on; practical ethics represents the attempt to navigate the broad social context of the workplace by reconciling professional rules with moral expectations and norms. This, again, is highly pertinent to a corporate compliance program, which seeks to encourage a business culture that respects legality, approaches business competitively yet thoughtfully, and sets standards for employee and organizational integrity. It is imperative for compliance professionals to understand practical ethics and to use dilemma sessions or open discussions with the businesses they advise in order to encourage a common comfort level with this sort of thinking throughout their organizations.

The TED/TEDx talks below emphasize the connection between AI and human life, commonly invoking questions about bioethics, practical ethics, and morality.

  • Artificial intelligence: dream or nightmare? (Stefan Wess) – Stefan Wess, a computer scientist and entrepreneur, provides a helpful primer on the history and current state of artificial intelligence and machine learning. Big Data, the Internet of Things, machine learning, speech recognition – all these technologies and AI-related topics are already part of daily life. But as the field continues to develop, how will organizations and individuals interact with the technology? How should it best be controlled, and is controlling it even possible? The many risk implications of AI must be considered as more advanced creations grow stronger and closer to reality every day.

  • Can we build AI without losing control over it? (Sam Harris) – Neuroscientist and philosopher Sam Harris is well-known for his commentaries on the interaction of science, morality, and society. Advanced AI is no longer just the theoretical stuff of science fiction and the very distant future. Superintelligent AI – completely autonomous, superhuman machines, devices, and networks – is very close to reality. Technologists, the organizations in which they work, and the communities for which they create must all be conscientious about the development of these technologies and the assessment of the risks they could pose. Contending with the potential problems of very advanced AI needs to happen now, in anticipation of the technology – not later, when it may no longer be possible to control what has been designed and brought to "life." Planning, careful control frameworks, and regulatory supervision that balances openly encouraging innovation with soberly considering safety and risk consequences are all necessary to conscientiously embark upon these amazing technological endeavors.

  • What happens when our computers get smarter than we are? (Nick Bostrom) – In the same vein as the previous talk, one consequence of extremely "smart" artificial intelligence is that machines could become as intelligent as human beings – and then, of course, eventually overtake humans in intelligence. This is alarming because it suggests that humans could introduce their own subservience or obsolescence via machines created to make machines smarter. Again, all participants in developing this technology, including the consumers to whom it is ultimately directed, need to consider their intentions in bestowing machines with thought and to balance the various risks carefully. With the ability for independent thought may also come the capacity for judgment. Humans must make an effort to ensure the values of these smart machines are consistent with those of humanity, in order to safeguard the relevance and survival of human knowledge itself for the future.

  • The wonderful and terrifying implications of computers that can learn (Jeremy Howard) – The concept of deep learning enables humans to teach computers how to learn. Through this technique, computers can transform into vast stores of self-generating knowledge. Many people will likely be very surprised to learn how far along this technology is, empowering machines with abilities and knowledge that some might think are still within the realm of fantasy. Productivity gains from applications of machine learning have the potential to be enormous, as computers can be trained to invent, identify, and diagnose. Computers can learn, through algorithms and their own compounding self-teaching, to do so many tasks that humans will be freed to test the limits of current inventions and to extend human problem-solving far beyond where it already reaches. This is certain to change the face of human employment – bots and androids are already being used for assisting tasks in fields as diverse as human resources recruiting and nursing patient care. Again, the extension of these technologies must be carefully cultivated in order to neutralize the existential threats to human society and life that may be posed by unchecked autonomy of machines and artificial learning. The time to do this is now, as soon as possible – not once the machines already have these advanced capabilities with all the attendant risks.

  • What will future jobs look like? (Andrew McAfee) – Picking up on the theme of the changing nature of human employment as machines get smarter, Andrew McAfee draws on his academic background as an economist to unpack what the impact on the labor market might be. The fear, of course, is that extremely human-like androids will take over the human workforce with their advanced machine intelligence, leaving humans largely irrelevant and out of work. The more interesting discussion, however, is not whether androids will take away work from humans but how they may change the kinds of jobs that humans do. Considering and preparing for this reality, and educating both humans and machines accordingly, must begin now.

Check back here in the future for continuing commentary on AI and its impact on human life and society, including technology and the ethics of knowledge acquisition, as well as more insights on specific AI innovations such as self-driving cars and machine learning.