Cybersecurity as a critical component of building ethical AI startups
Marcela Quintero Aguirre
In the context of Cybersecurity Awareness Month, Whale Seeker’s CEO Emily Charry Tissier and Troj.AI’s CEO James Stewart answer key questions that shed light on cybersecurity and the pivotal role it plays in growing AI startups sustainably.
What is cybersecurity and what are the usual threats AI companies face?
James: Cybersecurity is the protection of computer systems from theft and disruption. Increasingly, those systems include AI models, which introduce additional vulnerabilities such as ‘data poisoning’ and ‘model evasion’ attacks that malicious actors can use to influence model behavior.
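To make the ‘model evasion’ idea concrete, here is a minimal toy sketch (not Troj.AI’s actual tooling): a linear classifier and a fast-gradient-style perturbation, where a small, bounded nudge to the input flips the model’s prediction. The weights and input values are invented for illustration.

```python
import numpy as np

# Toy linear "model": predict class 1 if w . x > 0, else class 0.
w = np.array([1.0, -2.0, 0.5])   # hypothetical learned weights
x = np.array([0.4, 0.1, 0.2])    # hypothetical clean input

def predict(x):
    return int(w @ x > 0)

# Fast-gradient-style evasion: for a linear score the gradient w.r.t. the
# input is just w, so we step each feature against the decision, bounded
# by a small epsilon.
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)

print(predict(x))      # 1 on the clean input
print(predict(x_adv))  # 0 after the small perturbation
```

Real attacks target deep networks the same way, using the model’s gradients; the point is that a perturbation too small to matter to a human can still flip the output.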
“Companies are under tremendous pressure to innovate and may not have the bandwidth or security skills on a small team.”
— James Stewart
What are common reasons startups fail to prioritize cybersecurity in their early stages?
James: Companies are under tremendous pressure to innovate and may not have the bandwidth or security skills on a small team. The problem is that as the AI space matures, there is an expectation to develop Trustworthy AI, which encompasses several tenets: ‘fairness and bias’, ‘interpretability and explainability’, and ‘robustness and security’. All of these tenets must sit on a foundation of Ethical AI. We’re quickly moving from ‘nice-to-have’ to ‘must-have’.
Emily: When you are starting a company there are a million things to do, and they all matter. It’s easy for less exciting issues to fall to the back burner, especially if you don’t know where to start or what cybersecurity means for you and your company. Whale Seeker is lucky to have a CTO who had this issue top-of-mind from the beginning, which meant we knew we were scaling in a secure way.
Can Ethical AI exist in the absence of cybersecurity measures?
Emily: To be blunt, no: without safeguards for our data and algorithms, we can’t be sure we’re adhering to our own ethical AI standards. The concept of ethical AI doesn’t need cybersecurity to exist, but once it’s put into practice we need robust security so that we stay in control of every aspect of our data and models.
James: This is interesting. I view Ethical AI as the foundation of Trustworthy AI… and by its very nature it must also include the other tenets expected within Trustworthy AI (i.e., considerations for ‘bias and fairness’, ‘interpretability and explainability’, and ‘robustness and security’). So while Ethical AI is foundational with its own requirements, part of that foundation must be a focus on the entire Trustworthy AI framework and its tenets.
“By sticking firmly to our ethical mission in our AI development and deployment, we create trust and transparency, which is an important currency in the AI business.”
— Emily Charry Tissier
Why is being an ethical AI company important to Whale Seeker?
Emily: Well, at the root of things, it’s because it’s the right thing to do. Whale Seeker’s goals are broader than simply making money (although money is essential!). It’s important to our team that we create a better world for wildlife, the environment, and humans through our solutions. However, if that outcome comes at the expense of unethical actions along the way, such as outsourcing labeling to underpaid non-experts, AI that oversteps its intended uses, or AI that is used to cause harm to humans or wildlife, we’ll feel as though we’ve failed. By sticking firmly to our ethical mission in our AI development and deployment, we create trust and transparency, which is an important currency in an AI business typically mired in “black box” jargon and growth-for-growth’s-sake priorities.
How does Troj.AI ensure Whale Seeker is protected?
James: The first step to protecting any AI is to formally assess and track model robustness over time. This helps avoid the situation where an increase in model accuracy comes with an increase in model brittleness to the long-tailed edge cases encountered in the real world. Such a robustness assessment provides insights down to the class level, so data scientists can understand where their model weaknesses lie and more surgically improve their models to protect against adversarial attacks, which can be both naturally occurring and malicious. The second step we look to help with is an automated data audit that can highlight out-of-distribution data, which can indicate embedded Trojan attacks or even quality issues like noisy labels. The third step, and longer-term goal, is to provide an AI firewall that can flag potential out-of-distribution inputs during deployment. Any such protections need to be end-to-end to cover the entire AI pipeline.
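As a rough illustration of the out-of-distribution flagging idea in the second and third steps (a simplistic stand-in for the statistical tests a real audit or AI firewall would use), one can compare incoming inputs against the training distribution and flag anything that lands far outside it. The feature dimensions, threshold, and synthetic training data below are all invented for the sketch.

```python
import numpy as np

# Hypothetical in-distribution training features: 1000 samples, 4 features.
rng = np.random.default_rng(0)
train_feats = rng.normal(0.0, 1.0, size=(1000, 4))

# Summarize the training distribution per feature.
mu = train_feats.mean(axis=0)
sigma = train_feats.std(axis=0)

def is_out_of_distribution(x, threshold=4.0):
    # Flag inputs where any feature's z-score against the training
    # statistics exceeds the threshold.
    z = np.abs((x - mu) / sigma)
    return bool(z.max() > threshold)

print(is_out_of_distribution(np.zeros(4)))       # typical input -> False
print(is_out_of_distribution(np.full(4, 10.0)))  # far from training data -> True
```

Production systems use richer signals (density models, per-class distances, model confidence calibration), but the deployment-time gate works on the same principle: score each input against what the model was trained on before trusting its prediction.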
As a small business or startup with a limited budget, what options do companies have when it comes to cybersecurity solutions?
Emily: Password managers are a good way to start, as is discussing within the founding team what standards you will follow. You can also research the next steps for cyber protection so you have a plan ready for when you scale and have revenue.
James: There are many manageable best practices that small businesses can adopt to achieve good cyber hygiene. On the AI side, I would be cautious about data supply chains: really keep track of data provenance and lock that data down. The same applies if you’re incorporating off-the-shelf models as starting points. A must-have is a third-party mechanism for assessing and tracking robustness against those long-tailed edge cases, and that can be a very affordable differentiator.
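One cheap, concrete way to start “locking data down” in the sense James describes is to fingerprint every dataset file, so later tampering or silent changes are detectable before training. This is a minimal sketch using only the Python standard library; the function name is ours, not from any specific tool.

```python
import hashlib
from pathlib import Path

def fingerprint_dataset(paths):
    """Return a manifest mapping each file path to its SHA-256 digest."""
    # Any later change to a file (tampering, corruption, relabeling)
    # produces a different digest, so drift is detectable by comparing
    # a fresh manifest against the stored one before each training run.
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}
```

Storing the manifest in version control alongside the training code gives a lightweight provenance record: if a digest changes unexpectedly, investigate before retraining.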
Follow us on social media throughout the month of October to learn more about cybersecurity and Ethical AI.