July 3, 2020

Responsibility in artificial intelligence

Emma Gillies

For some, artificial intelligence (AI) is a way to save the world; for others, it’s a black box they can’t trust. And for science fiction movie lovers, the term might conjure up images of hackers, killer robots, and the end of humanity as we know it. At its core, though, AI simply refers to the branch of computer science that strives to understand and replicate intelligence in machines.

Like any significant progress in science and technology, AI promises substantial benefits. AI algorithms can monitor animal populations, detect tumours in lungs, reduce companies’ operational costs, make cities smarter, help us make management decisions, and model and plan for future scenarios.

As with any technology, where there are benefits, there are risks. For example, intelligent machines could disrupt the job market through excessive automation, influence politics by spreading misinformation, exacerbate social inequalities, and violate our privacy. In the wrong hands, an AI agent could be programmed to do something destructive. And while AI systems might not morph into conscious, human-killing robots like those in The Matrix or The Terminator any time soon, they could very well start to diverge from the goals that humans set in the first place, especially when the people implementing these systems don’t understand the risks and limited scope of AI.

For instance, feeding biased data into an AI algorithm or failing to consider all the ethical implications of a system could make a well-intended model very destructive. Amazon’s AI hiring tool discriminated against women because it had been trained primarily on male resumes; Microsoft used Twitter to train a chatbot called Tay, whose statements soon turned racist and inflammatory; and recently, AI researchers spoke out against predictive crime software built on racially biased facial recognition algorithms.

It is up to us humans to minimize such risks. This is where responsible AI comes in. How can we reap the benefits of AI while also weighing concerns about humanity itself, unemployment, wealth distribution, and legal and security issues? Responsible AI presents a framework for this, ensuring that AI technologies are used transparently, ethically, and accountably.

The Montreal Declaration for Responsible AI Development builds an ethical framework for AI development to ensure that everyone benefits from the AI revolution. The Declaration is meant to be used by organizations, individuals, and political representatives building AI systems or pondering their implications. There are ten principles for AI practitioners to follow:

• help increase the wellbeing of all sentient beings;

• respect people’s autonomy;

• protect privacy;

• foster solidarity between people;

• be subject to democratic scrutiny;

• contribute to a more equitable society;

• maintain social and cultural diversity;

• show caution and avoid adverse consequences;

• empower human beings rather than machines to make decisions; and

• ensure ecological sustainability.

In essence, as a solution for keeping AI in check, the Montreal Declaration proposes a recipe for making humans more accountable and responsible.

Whale Seeker abides by the Montreal Declaration and values respect, transparency, and ethical AI use. We keep our clients’ information private, do not condone the use of our work for purposes such as whaling and illegal fishing, and support an inclusive workplace. We are also committed to creating explainable and interpretable machine learning for clients, so that they can trust the decisions they make and understand the AI models they’re using, rather than rely on a black box built on incorrect or biased data. We believe that AI can and should be used for good—and doing so requires responsibility and commitment on the part of AI practitioners like us.
