Artificial Intelligence can be big for the non-profit sector: Jodie Sangster
Artificial Intelligence is the new buzzword in the technology space. But how much of it is really AI? We speak to Jodie Sangster, CMO Liaison Lead at IBM Watson, on several aspects of Watson AI and AI in general.

Updated: Sep 25, 2018 21:28 IST
IBM’s Watson has become a common reference point for any conversation around Artificial Intelligence (AI).
Since its debut in 2010, Watson AI has made its way into multiple industries, ranging from education, customer engagement and marketing to healthcare. It has also seen innovative implementations, such as creating movie trailers and partnering with Vogue to launch AI-driven dresses.
IBM has built an entire ecosystem around Watson AI, which is now readily available as an API that can be implemented in any service or platform. The concept behind Watson is the same as that of any other AI: use machine intelligence and data analysis to improve the productivity and efficiency of businesses.
We spoke to Jodie Sangster, CMO Liaison Lead - IBM Watson, at the HT Brand Studio Live on Tuesday. Jodie also shared her views on the growing misuse of the term ‘Artificial Intelligence’ and how to distinguish between real AI and fake AI. Here are edited excerpts from the interaction.
How has the adoption of Watson AI been over the years?
It’s still early days for adoption. A lot of companies have invested in the technology, so the technology is available. Now it’s about leveraging it to get to the next outcome. We are still in the early stages. In marketing, it has been just two years of embracing AI, and over the next 18 months we are going to start to leverage that and draw learnings from it.
A lot of companies are using the term “AI” for almost everything. How can end users distinguish between real AI and non-AI?
AI is becoming a marketing buzzword, so everyone’s claiming to have an AI. The best way to explain it is that true AI has learning capabilities. It must have the ability to ingest information, find patterns, learn from those patterns and teach itself. If a solution does not have these traits, it’s most probably not AI; it’s just technology working for you. The true difference between the two is that one continuously learns.
Watson AI has been implemented in multiple industries/sectors. What more potential and adoption areas do you foresee?
There was a very good Accenture report that highlighted how AI can be used in different business sectors; it listed as many as 20 sectors with potential for AI use. AI will play a role in every industry; it just depends on how quickly each one adopts it. Health, finance and education are among the sectors leading AI adoption, and traditional sectors like telecom and utilities are now starting to look at artificial intelligence.
Are there any left on the table? Absolutely, yes. One of the areas yet to embrace AI is the not-for-profit sector. Think of the value AI can have there; it’s going to be enormous. There are still many, many sectors that haven’t joined the bandwagon.
What’s your take on bad AI? How does IBM ensure Watson AI isn’t corrupted?
The first thing is that AI is trained. It does not sit on its own and it’s not completely independent; it’s always monitored. So, whether it is a corporation or a government, efforts are being made to ensure AI doesn’t go off course. That has been the learning from very early adoption, where AIs were left to learn on their own and later had to be pulled. I think there are now guardrails around AI, and companies are much more aware.
There are four key components to making sure that AI is used for good. First is the government’s brief that AI, and data in general, should be used responsibly. Second is that companies commit to doing the right thing with the data. Third is us as individuals: we have to be aware of what we are sharing and how our data is being used. And fourth, the technology itself is self-monitoring; it will flag when something is not in line.
People are now concerned about their data and its possible misuse. Some don’t want their data to be analysed at all. AI, on the other hand, relies on data. So, what happens in that case?
It is an interesting one. There are two things developing in parallel, especially if you look at how the law is evolving. The EU recently introduced GDPR, which sets a new standard for how data can be collected and used, and requires that consent be provided by the user. At the same time, the technology is evolving as well, and the two are intrinsically linked. Consumers have already been given control over their data, and they have the right to choose whether their data is collected or whether automated decisions are made on their behalf. Following this, technology companies like IBM have had to reconfigure how the technology is put together to take account of that. And that’s going to continue: as the technology evolves, the law will have to evolve, and the two will have to go hand in hand.