AI, Robots and Humanity: What it Means to be a Robot in the Age of Humans (20th March 2018)

A Panel Discussion and Debate Around Developing our Moral Code and Societal Ethical Understanding

We are bombarded with oft-dystopian descriptions of humanity’s future driven by the dangers of artificial intelligence. From sci-fi old and new to our most eminent scientists and entrepreneurs, we have been brainwashed into a modern-day Luddite mentality, in which all things automated and digitally controlled are rushing us towards mankind’s downfall.

Beyond the so-called “ethics of AI” relating to transparency, privacy, personal data, communications, news and myriad other issues, AI technologies are already being used in applications from call centres to self-driving and autonomous vehicles, robotic surgeons, automated hedge funds, infrastructure control and weapons systems – applications involving life-or-death decisions normally made by humans.

AI and machine learning are affecting people right now; there are inherent legal and ethical consequences when they are used to automate decisions in areas such as healthcare, insurance, lending, recruitment, news feeds, political persuasion and policing, to name a few areas where they are being used on a minute-by-minute basis.

Political, business, education and social leaders are only now considering the very complex challenges of using AI in a way that is ethical and sustainable. Our understanding here is only just developing, and a critical challenge in deciding what “ethical AI” means is defining ethics as a whole: understanding our humanity, and developing our moral code and societal ethical understanding, so that we better understand what it means to be human in an age of AI.

Perhaps, as we develop our ethics, we should also anticipate a time when we will need to ask what it means to be an AI-driven robot in the human age.

About our Expert Panellists:

Prof Maja Pantic

Maja Pantic is a Professor of Affective and Behavioural Computing and leader of the i·BUG group, working on machine analysis of human non-verbal behaviour and its applications to human-computer, human-robot, and computer-mediated human-human interaction at Imperial College London and the University of Twente, Netherlands.
Prof. Pantic has published more than 250 technical papers in the areas of machine analysis of facial expressions, machine analysis of human body gestures, audio-visual analysis of emotions and social signals, and human-centred machine interfaces. She has more than 20,000 citations to her work, and has served as a keynote speaker, chair and co-chair, and an organising or programme committee member at numerous conferences in her areas of expertise.

Catalina Butnaru

Cat is an IEEE Ethics in Action committee member whose work focuses on raising awareness of the Ethically Aligned Design framework, and she has contributed to redefining wellbeing in the future of work. She is a City AI London Ambassador, democratising and sharing knowledge about applications of AI, a Women in AI Ambassador for London, and a contributor to IEEE working groups. She is currently developing an ethical-by-design process for integrating AI into software development.
She studied Change Management at the European Technology Centre, Entrepreneurship and Innovation at Stanford ‘Ignite’, Psychology at Al.I.Cuza, and Marketing Strategy at the Chartered Institute of Marketing.

Kriti Sharma

Kriti is an Artificial Intelligence technologist and a leading global voice on AI ethics and its impact on society. In addition to advising global software companies on AI, she focuses on AI for Social Good. She built her first robot at the age of 15 in India and has been building AI technologies to solve global issues ever since, from productivity to education to domestic violence. Kriti was recently named in the Forbes 30 Under 30 list for advancements in AI and was included in the Recode 100 list of key influencers in technology in 2017 alongside Elon Musk, Jeff Bezos and Mark Zuckerberg. She was elected as a Civic Leader by the Obama Foundation for her work in ethical technology. She is a Fellow of the Royal Society of Arts, Google Grace Hopper Scholar and recently advised the UK Parliament in the House of Lords on AI Policy. Kriti frequently writes about her views on the ethics of AI in global media such as Fortune, BBC, Harvard Business Review, The Times, Financial Times and TechCrunch.

Will Heaven (Chair)

Will Douglas Heaven is a freelance writer and editor. He is a consultant for New Scientist and editor of the New Scientist Instant Expert book on artificial intelligence. He was previously chief technology editor at New Scientist and founding editor of the BBC’s tech-meets-geopolitics website Future Now. He has a PhD in computer science from Imperial College London and knows what it’s like to work with robots. You can find him on Twitter: @strwbilly.


Cybersecurity: What’s Real, What’s Not and What’s Next? (20th February 2018)

Over the past decade or so we have become increasingly dependent on technology in our daily lives; this has opened us up to a much-foreseen and somewhat dystopian threat – that of cyber insecurity.

While in the late 1990s and early 2000s cybersecurity seemed an issue only for your company’s IT team, today it’s a multi-billion-pound global industry, with worldwide spending expected to top £1 trillion by 2022.

Whether it’s an email scam targeted at individuals, a corporate data theft affecting millions of people at once or a DDoS attack, the rise in cyberattacks and their increasing reach have made cybersecurity a focus of everyone’s attention – to the point where we worry less about someone stealing our wallet than about someone stealing our entire digitised life.

Every day one hears of moral panics in business and outrage in society about ‘cybersecurity’. This talk will describe in outline what the real issues are – and why they’re real – and address some of the persistent myths around the subject.

Our Speaker will speculate on likely developments in the field – in terms of emergent technologies and their accompanying risks – and on the likely evolution of organisations, from commercial enterprises to national governments and individual consumers, as they move to mitigate these new risks.

Book here to hear our speaker – Henrik Kiertzner – give examples of cybersecurity developments, realities, truths and myths and shed some light on the evolution, challenges and solutions that will arise.

About our Speaker: Henrik Kiertzner

Henrik Kiertzner served in the British Army worldwide for many years, as a linguist and intelligence specialist.

Since leaving the Army in 2000, he has been, variously, IT Director of an international engineering consultancy, a security and risk consultant in both real-world and cyber domains and now makes a living discussing and delivering analytics and big data solutions to cybersecurity challenges throughout EMEA.

Among his proudest achievements are co-authorship of the security strategy for the London Olympic Park, authorship of a national border security strategy for the last-but-two government of a now failed state and the specification and delivery of a security architecture supporting a NATO nation’s newly-deployed battlefield management system.

Henrik prides himself on using his linguistic skills to interpret between suit and t-shirt. He is a Fellow of the British Computer Society, a Chartered Information Technology Professional, a Member of the Institution of Engineering and Technology and holds a valid Cycling Proficiency Certificate.


Beyond the AI Hype: Or is that just a Chatbot winding us up? (21 November 2017)

Globally-renowned scientists and entrepreneurs have warned of the immensity and immediacy of threat from AI. Prof Stephen Hawking said in 2014 “The development of full artificial intelligence could spell the end of the human race.” But is this a real concern or hyperbole?

Since that first alarming statement, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts have signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which cannot be controlled.
