More than 70 per cent of teens have used Artificial Intelligence (AI) companions and half use them regularly, according to a new study from Common Sense Media, a group that studies and advocates for using screens and digital media sensibly.
Researchers find those results worrisome and say they should be a warning to parents, teachers and policymakers: the now-booming and largely unregulated AI industry is becoming as integrated into adolescence as smartphones and social media.
Tech expert Carmi Levy joined guest host Tamara Cherry on The Evan Bray Show on Wednesday morning to talk about the implications of the growing use of AI in our children’s lives.
Listen to the full interview with Carmi Levy, or read the transcript below:
The following questions and answers have been edited for length and clarity.
TAMARA CHERRY: If you’ve got kids who are starting to think about what they want to be when they grow up, or maybe you’re starting to think about what you hope your kids will be when they grow up, you might be worried about artificial intelligence (AI) and asking: is this job still going to exist?
CARMI LEVY: There is an AI applicability rating applied to each job, which measures how vulnerable that job is to being taken over by AI. If a job has a high AI applicability, it’s likely to be taken over by AI at some point. For example, if you’re a knowledge worker, you work with computers or you’re a mathematician, you have a high vulnerability rating. The same goes for office and administrative support, sales and communications: writers, editors and journalists are all at risk.
What kinds of jobs are less vulnerable to being taken over by AI?
LEVY: Roles with a low AI applicability are roles that can only be done in person; you probably couldn’t do them remotely. Take a highway maintenance worker fixing potholes: you really can’t get AI to do that. Other examples include an orderly, a nurse or nursing assistant, a ship engineer or an embalmer.
What should be kept in mind when looking at research from platforms that use AI?
LEVY: For example, when reading research from Microsoft, remember it is a tech company, but more importantly, it’s an AI company. They’ve got huge investments in AI. They brought us the Copilot platform, which they’re putting into everything they sell, so they have a self-serving reason for publishing research on AI. But at the same time, it’s important that we talk about this with our kids. As you’re making decisions about where you want your life to go, you have to include AI vulnerability in the mix. That has to be part of the conversation, even if the research isn’t complete, or is a little bit biased.
Why is it important to talk to our kids about AI?
LEVY: It’s important to have that conversation because AI disrupts and changes the job market. The best thing we can do is teach our kids to navigate those waves and let them know they don’t just pick one job and stick with it. Technology is going to keep shifting throughout their entire lives. They need to build resilience for that now.
What guardrails should be put in place for AI to keep our kids safe?
LEVY: The Centre for Countering Digital Hate tested ChatGPT for a research project. They wanted to see if they could push past the guardrails: could they make ChatGPT say things it probably shouldn’t say? The answer is yes, they could. They were able to get ChatGPT to tell a 13-year-old how to get drunk and how to get high, and to get instructions on how to conceal an eating disorder.
The most heartbreaking one: they were able to get it to compose a suicide letter to a child’s parents. This illustrates that the protective tools are simply not good enough. Even OpenAI, when it saw this research, admitted it was working on the problem, and OpenAI CEO Sam Altman has committed to making the platform safer. If someone asked ChatGPT outright to write them a suicide letter, ChatGPT would say, ‘No, I can’t, because that violates my programming.’ But someone could use a loophole to override that programming, and kids, being kids, are really good at sharing that kind of information with each other.
How widely is AI being used worldwide? What needs to be done to make sure it is being used safely?
LEVY: Less than three years after ChatGPT was released to the public, about 800 million people are using it. That’s one in every 10 people on Earth. At that scale, a company really needs to know how to build those safety features in. Governments need to hold OpenAI and other AI companies accountable, because all of their technologies work the same way and carry the same risks.
There needs to be better legislation so that if companies don’t build those protections in, and don’t keep testing them and identifying weaknesses, they are sanctioned. They should pay a heavy price for it. Otherwise, don’t release these tools until they’re ready. This speaks to the wild west of AI right now. Before, you could not have imagined a huge company putting out a product like this without testing every possible scenario, but there is such a race by companies like Microsoft and OpenAI to just be first.
For mental health help, you can contact:
- 9-8-8: National Suicide Prevention and Mental Health Crisis Hotline – Dial or text 9-8-8; 24/7 voice or text support for mental health crisis and suicide prevention. Visit 988.ca for more information.
- Call HealthLine at 811.
- Call 911 if someone is at risk of harming themselves or others.
- Call emergency crisis hotlines or general counselling services.
- Visit Counselling Connect Saskatchewan for free access to rapid access counselling sessions.
- Contact the University of Regina Online Therapy Clinic.
- Call the French-language TAO Tel-Aide Helpline at 1-800-567-9699. TAO Tel-Aide provides free and confidential telephone services 24/7 for French-speaking people experiencing psychological distress and in need of emotional support.
- Contact Hope for Wellness, a national 24/7 resource for all Indigenous people across Canada. Call 1-855-242-3310.
- Call Kids Help Phone – Offers professional counselling, information and referrals. Phone 1-800-668-6868, text 686868 or chat online. It’s confidential, free and available 24/7.
- Call the Farm Stress Line