Artificial intelligence Archives - Ascend Magazine

Diagnosis ChatGPT
Wed, 28 Jan 2026

Imagine going to your doctor and getting a medical diagnosis from a large language model (LLM) like ChatGPT. It may seem futuristic, but it is closer than most people realize.

“This kind of agentic AI [artificial intelligence] is the next big thing,” says Associate Professor Vijay Mago, director of the DaTALab at York University and Chair of the School of Health Policy & Management, Faculty of Health.

“Right now, a clinician reviews test results such as CT scans, gathers the patient’s medical history, and uses that information to make a diagnosis or decide on treatment. In the future, AI agents will be capable of performing many of these tasks – analyzing data, integrating medical information, and supporting or even making clinical decisions. We’re heading toward a much more advanced stage of health-care intelligence.”

It is the stuff of science fiction, and York researchers are on the cutting edge of helping to make it happen in a safe and ethical way. As Mago points out, LLMs are becoming increasingly sophisticated, and the emergence of agentic AI – systems capable of autonomous reasoning and decision making in health care – appears to be imminent.

Built on LLMs, these agentic AI models involve several agents working together to accomplish complex tasks – memorizing and collecting data, planning, reasoning and learning.

Mago is part of a project developing an AI-powered doctor’s assistant for patients with chest pain to enhance diagnostic support in First Nations communities in northern Ontario, as well as other rural areas.

“AI models are becoming more intelligent every day.”

If a patient presents with chest pain, a doctor would ask all sorts of questions, including medical history and symptoms, but Mago says the AI assistant could say: “‘Hey, doctor or nurse practitioner, you missed asking this question,’ or recommend care approaches or suggest an ambulance be called.” The ultimate goal is to improve diagnostic accuracy, patient outcomes and safety.

“For rural emergency departments, where there is limited access to critical care, these AI-based approaches can help alleviate a lot of pressure,” says Mago, a member of Connected Minds and the Centre for AI & Society at York.

The model, once complete, still needs to undergo testing in a clinical setting, but down the line these types of AI models could help manage and diagnose any number of ailments, including strokes or diabetes.

“There are some very exciting things happening right now in the field and a rush to leverage the potential of these systems to improve health care, which would eventually include treatment options and predicting disease progression and outcome,” says Mago, whose health-related research has garnered some $3.5 million in funding, including from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council and the Public Health Agency of Canada (PHAC).

With such a surge in interest, Mago acknowledges, comes the necessity to ensure AI is unbiased, ethical and adding benefit, rather than harm, to patients.

“AI models are becoming more intelligent every day. The goal is to figure out how to infuse them with emotional intelligence and cognition, and to make sure they are safe,” says York Research Chair in Safe AI for Health Equity, Elham Dolatabadi, who recently received funding through the Canada Foundation for Innovation’s John R. Evans Leaders Fund to start the Health Equity and AI Lab (HEAL).

Elham Dolatabadi Photograph by Chris Robinson

Over the next five years, she will be part of a team building a human-AI complementary system that combines human brain power with cognitively robust and emotionally intelligent AI for use in health care and mental health.

She agrees agentic AI holds the promise of being a game changer for improved health care, which is why much of her current focus is on creating toolkits and pipelines to evaluate these systems before they are deployed. These multi-agent models are more complicated to assess than non-agentic generative AI models, whose outputs can more easily be checked against expectations.

When dealing with several agents in a model, if one is biased, perhaps toward a certain demographic, it could throw off the accuracy and safety of the output, the diagnosis or prognosis.
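How a single biased agent can skew a pooled result is easy to see in a toy sketch. Everything below – the agents, the scores and the bias term – is invented for illustration and stands in for no real system:

```python
# Toy multi-agent pipeline: two "agents" score a patient's risk and the
# system averages them. One agent systematically under-scores an
# underrepresented group, so the combined output is skewed even though
# the other agent is fair. All names and numbers are hypothetical.

def history_agent(patient):
    # Fair agent: scores risk from reported symptoms.
    return 0.7 if patient["chest_pain"] else 0.2

def imaging_agent(patient):
    # Biased agent: under-trained on one group, it under-reports risk.
    score = 0.8 if patient["abnormal_scan"] else 0.1
    if patient["group"] == "underrepresented":
        score -= 0.3  # systematic error from skewed training data
    return score

def combined_risk(patient):
    # The system's output: a simple average of the agents' scores.
    return (history_agent(patient) + imaging_agent(patient)) / 2

a = {"chest_pain": True, "abnormal_scan": True, "group": "majority"}
b = {"chest_pain": True, "abnormal_scan": True, "group": "underrepresented"}
print(round(combined_risk(a), 2), round(combined_risk(b), 2))  # 0.75 0.6
```

Identical symptoms, different flagged risk: the kind of gap evaluation pipelines aim to surface before a system reaches patients.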

“Hackers are another issue. They can attack one of the agents in the group or insert a faulty agent into the system, which may corrupt how the system thinks, or push it into hallucinating. That’s something many will find surprising, but AI not only lies; it hallucinates,” she says. In both cases, the model outputs something that looks factual, but is not.

“Hallucination is very complicated to understand, but we are working on a dynamic pipeline for hallucination evaluation, as well as pipelines for AI agentic models in mental health, acute care and outpatient care. These evaluations span different dimensions – clinical values, behavioural values or cognition, and emotional intelligence – so the models align with human values,” says Dolatabadi of York’s Faculty of Health and a faculty affiliate of the Vector Institute.

There can be different agentic systems for each health-care application. What will work for mental health care will not work for acute care. “We also need to ensure the output is not something AI made up because it didn’t know the answer.”

Gender, ethnicity, race and skin colour can all affect accuracy. Patients who fall outside the average parameters are not always served well.

“That’s the risk,” says Ian Stedman, York Associate Professor in the School of Public Policy & Administration, Faculty of Liberal Arts & Professional Studies, who is cross-appointed to York’s Osgoode Hall Law School, where he graduated with a PhD. “Different subpopulations are not always captured as well with AI models. The question becomes, as a doctor do you still deploy the model if you know that it’s going to catch 80 per cent of European diseases and 10 per cent of sub-Saharan African diseases?” It’s a question Dolatabadi ponders often as she develops evaluation tools, suggesting the doctor needs to be aware of any limitations so they can make adjustments.

Ian Stedman Photograph by Horst Herget

Despite the current shortcomings, Stedman too holds out great hope for the power of AI in the future of health care, specifically in how AI can help better leverage genome sequencing. As someone who lives with a rare genetic condition, he is a prime example of a person whose health data might not yet be captured by larger AI models, as is his daughter who inherited the same gene mutation.

It took 32 years of searching before he finally had a diagnosis and learned the name of his rare disease, which allowed him to access appropriate medicine. Stedman says that getting a diagnosis felt, at the time, like winning a lottery.

There are more than 8,000 known rare diseases that affect one in 12 Canadians, but only about five per cent of them have an effective therapy, and even fewer patients have access to the therapies that do exist. Stedman really is among the lucky ones.

In his vision of the future, he says, AI plays an outsized role in health-care systems. With its ability to speed up genomic data interpretation, unpack clues to rare disease diagnoses and generally help clinicians better understand each individual’s needs, he believes that getting AI right will be non-negotiable in moving toward the ideal of personalized health-care systems.

“If you look at the power of genomics in the context of its ability to improve care, it comes from unlocking the data through genome sequencing. If we can do that with big data analytics and in an everyday clinical setting, we can change health care,” he says. “The challenge is all the legal stuff, which is where my unique hat comes in.”

Stedman brings a particular set of skills, knowledge and experience as a patient to the realm of AI and health. He is on the executive of the Centre for AI & Society and Connected Minds at York and the Chair of the advisory board for the Canadian Institutes of Health Research’s (CIHR) Institute of Genetics.

“The potential of AI is bigger than people imagine.”

Ensuring rare disease health-care data and genome sequencing data is available across the country is a big part of the equation. That was the impetus for the Pan-Canadian Genome Library (PCGL), funded by the CIHR and Genome Canada’s Canadian Precision Health Initiative, with the goal to sequence 100,000 genomes and deposit them in the PCGL. “We’ve done a lot of infrastructure work behind the scenes to build a secure, privacy-protected, properly governed repository,” says Stedman, a member of the PCGL leadership team.

He is confident the repository will lead to more equitable precision health and result in faster rare disease diagnoses. “AI, genetics, precision therapy, it all fits under one umbrella: how do we build a personalized health-care system where data analytics has an actual bedside impact? When you walk into the ER or your family doctor’s office and data analytics has some impact on you getting quicker, better, more accurate care, people will understand the value of the infrastructure,” he says.

He was also instrumental in the creation of the Canadian Rare Disease Network, which got its start in 2023 with funding from One Child Every Child, a Canada First Research Excellence Fund grant hosted by the University of Calgary. It helps connect the country’s rare disease scientists and clinicians with patient expertise to advance care and research.

“I believe the rare disease patient’s experience will teach us how to create a personalized health-care system. If you build a system that treats every individual as an individual, you’ll create a system that cares for more. I think we’re going to find we’re all rare, even if we don’t call ourselves rare disease patients,” he says. “The potential of AI is bigger than people imagine.”

It is already exploding into so many aspects of the health-care field, even if patients are unaware, adds Mago. In addition to his work on LLM-based doctor assistants, he is partnering with the Northern Ontario Academic Medicine Association to use LLMs for text simplification, such as producing plain-language medical summaries of highly technical research articles.

It is another way AI is removing geographical boundaries to medical knowledge, he says. “It’s making research a lot more understandable and accessible, not only to lay people, but also to medical practitioners.” The continuing challenge is ensuring the summaries are ethical and sensitive, including to Indigenous and Black communities. With an ill family member in India, Mago knows first-hand the value of these sorts of AI-assisted summaries to bridge the gap not only between layperson knowledge and medical jargon, but also between different countries.

He is also in the second stage of a project that monitors substance-related issues in real time using an LLM-based surveillance system to analyze social media, mainstream news items, including images and videos, and hospital reports across the country. As part of a larger team, the work could result in earlier intervention and more targeted health-care action. It has already expanded into a multi-institutional collaboration with the Canadian Centre on Substance Use and Addiction and the Urban Data Centre at the University of Toronto, funded by PHAC’s Enhanced Surveillance for Chronic Disease Program.

“This is the time for us to embrace AI, especially in the medical domain because there is the promise of huge benefits,” says Mago, who is excited about the opening of York’s new School of Medicine, saying it will help further accelerate research outputs.

He also believes Canada should design and develop its own AI technology and software rather than use technologies made elsewhere that are then customized for Canadians. It’s a matter of AI sovereignty, he says.

Space robotics
Wed, 28 Jan 2026

Space is infamously inhospitable to life, but what is less universally understood is that it is also inhospitable to many technologies. The same high-energy radiation exposure that poses health risks to astronauts from solar particle events also renders the chip in a smartphone, a technology used by billions of people on Earth every day, useless. And while some technologies simply cannot do the job, others are not yet trusted to do it.

This is largely the current situation when it comes to artificial intelligence (AI) applications in space. Space missions are complex endeavours that involve travelling great distances, harsh environments and many unknown factors, and they cost hundreds of millions to billions of dollars from conception to completion. While some risk is inherent to the work, a certain level of conservatism is also necessary to ensure a mission’s success.

“AI works in a non-linear way. This is the advantage of AI, what makes it so powerful.”

While some labs at York University are creating technologies that are applied to actual space missions today, the work being done to develop AI applications looks towards a future, exact date unknown, where AI robots will take over much of the labour of space exploration. In some cases, that technology is simultaneously being translated to applications on Earth, where there are far fewer environmental constraints, but the need for greater trust in AI is no less important, say the researchers at York’s Lassonde School of Engineering who are developing next-generation technologies.

Decades ago, Zheng Hong Zhu’s interest in space robotics was piqued after NASA launched the Hubble telescope, only to announce shortly after that the mirror Hubble relied on to capture images of distant galaxies was flawed and all the images it captured were blurry. NASA looked into the feasibility of sending robots to fix the issue, but later abandoned this approach, instead sending astronauts. Yet the idea of using space robots to service and fuel crafts intrigued Zhu, and it remains an area of research interest to the present day. Now a professor at York and director of the Space Engineering Design Laboratory, he is developing future robotics and AI applications for MDA Space – Canada.

Zheng Hong Zhu

“AI works in a non-linear way. This is the advantage of AI, what makes it so powerful. But because of its unexplainable nature, because you cannot disentangle it, the space industry does not currently trust it, but is still very much interested in exploring the future of these powerful potentials,” he says.

Currently, explains Zhu, every time a satellite or craft is launched into space, an exact replica is created and left on Earth, so if something goes wrong, engineers can work with the replica to pinpoint the error and fix it. AI, in not showing its work, does not lend itself to such easy corrections.

While colleagues at MDA are attempting to create explainable AI to address this concern, Zhu is working on other pieces of the puzzle, often involving not the most powerful and complex AI technologies, but simpler ones that might be adapted to space sooner. These include simulations to train AI vision for the low-light conditions encountered when approaching a spacecraft; swarming technologies, in which several modest AI robots work together to accomplish more complicated tasks; and training AI robots on learning tasks like grip strength. Zhu is also looking at developing lightweight materials that can be used in space as radiation shields, as the sophisticated AI chips developed by companies like Nvidia cannot currently be used in space.

Michael Bazzocchi

Associate Professor Michael Bazzocchi, director of the Astronautics and Robotics Laboratory at York, says that while his field used to be focused on fairly traditional methods, that is beginning to change. “Previously, this work has been dominated by classical techniques, but I would say in the last 10 years or so, we’ve seen a huge expansion where there’s interest in how we can apply machine learning, reinforcement learning, deep learning, deep reinforcement learning and computer vision to these fields.”

While space does pose unique challenges in terms of the environment, Bazzocchi says that many of the technologies they develop for space can be applied to the benefit of humans on Earth and vice versa.

“While they’re not the same problem, they have many similarities that allow us to apply related techniques.”

One example from his work is an exoskeletal suit designed for firefighters to reduce the amount of effort required while doing strenuous tasks. Part of this lab work involves motion capture to understand a firefighter’s movements, as well as different optimization techniques and algorithms to understand how the device might reduce or increase muscle activation in beneficial ways.

“If you could send out autonomous robots first, they could perhaps create the right conditions for the humans to follow.”

In space, the challenge for astronauts is quite the opposite: zero gravity conditions lead to muscle atrophy and eventually bone loss. The same research and principles can create wearables that purposefully create more resistance for astronauts when executing basic tasks.

“It’s not artificial gravity, because it won’t bring them to the floor, but it will make their movements more difficult,” he says. “When they want to do a task, for example, and they have to flex their arm, there’s a motor that’s resisting the motion so that it is not as easy.”

While the possibilities are exciting, Bazzocchi says that in both scenarios, machine learning and AI are not yet trusted.

“When you’re dealing with humans, you want predictability, and very obvious control that’s not going to potentially do something that’s unexpected and lead to injury,” he says. “And the same thing goes for space, when dealing with these multi-million-dollar assets, you want a certain level of predictability and explainability if something goes wrong.”

Still, Bazzocchi thinks it won’t be long until AI plays a bigger role in space.

“There have been some applications of autonomy in space already, such as for time-sensitive operations where communication delays are long, or for doing data processing. So, for example, creating algorithms that evaluate Earth imagery to detect wildfires is very much already in play.”

Zhu says that one day AI technologies might develop to the point where they can pave the way for creating hospitable living conditions for humans in space.

“Elon Musk wants to send humans to Mars. I think in the short term, it’s very difficult because the astronauts would die, the radiation exposure would be too much. But if you could send out autonomous robots first, they could perhaps create the right conditions for the humans to follow,” says Zhu.

Still, how would we know that the robots, working together and autonomously settling Mars, would still act in the interest of the humans that sent them there?

That’s the billion-dollar space robotics question, and according to the researchers, one we don’t currently have an answer for.

Mitigating bias
Wed, 28 Jan 2026

As artificial intelligence (AI) advances – particularly large language models (LLMs) which are increasingly integrated into social, governmental and economic systems – discriminatory stereotypes and biases persist. These prejudices reflect and reinforce historical and systemic inequalities embedded in massive datasets that models like OpenAI’s Generative Pre-trained Transformer (GPT) and Google’s Gemini learn from.

York University researchers from across faculties are joining forces to develop frameworks to identify and mitigate biases in LLMs rooted in colonialism, racism and ableism.

Health informatics Professor Christo El Morr’s work spans a range of topics, from achieving accessible and inclusive AI to modelling and building bilingual and accessible knowledge infrastructures, and creating frameworks to address AI bias.

“Currently, AI operates as a tool of corporate and state control, reinforcing systems of exclusion and marginalization under the guise of progress,” says El Morr, who co-edited Beyond Tech Fixes: Towards an AI Future Where Disability Justice Thrives, published in October 2025. The book challenges the prevailing assumption that AI can be “fixed” by improving datasets, adding ethical guidelines or refining bias-detection algorithms.

Christo El Morr

Internationally, El Morr and his Faculty of Health collaborator Professor Vijay Mago recently convened philosophers, social scientists and AI researchers at a symposium in India to advance global research collaborations with support from York’s Global Research Excellence Seed Fund.

El Morr is involved in multiple equity-focused and LLM-related studies, partnering with colleagues at York, including long-time collaborator and Critical Disability Studies Professor Rachel da Silveira Gorman in the Faculty of Graduate Studies, and other Canadian and international universities.

As a principal investigator on several studies funded by Social Sciences and Humanities Research Council (SSHRC) grants, El Morr collaborates with Gorman on advancing AI and disability advocacy, accessibility for persons with disabilities and AI and equity.

“Across projects, we centre data sovereignty, community governance and decolonial design. This means long-term partnerships, ensuring consent over data use, and sharing power over how models are trained, evaluated and deployed,” says El Morr.

His most recent SSHRC-funded research project, Equity Artificial Intelligence: towards a framework to address AI bias, with Gorman, Faculty of Health Assistant Professor Elham Dolatabadi and Lassonde School of Engineering Assistant Professor Laleh Seyyed-Kalantari, explores how AI can be reimagined through frameworks of equity, justice and liberation.

Seyyed-Kalantari also leads the study, Design of Benchmarks for Fairness and Bias Evaluation and De-Biasing of Natural Language Model to Incorporate User Diversity, focusing on addressing fairness issues in LLMs, which often favour majority groups due to biased training data.

Laleh Seyyed-Kalantari

Supported by a Connected Minds seed grant, the project is a collaboration with the Vector Institute and aims to design domain-specific testing benchmarks to assess and score fairness across diverse dimensions such as race, gender, religion and social status.

“By focusing on linguistic bias, particularly in the context of sentiment analysis, my work aims to mitigate stereotypes and ensure more inclusive LLMs that better support marginalized groups, including Indigenous Peoples, racialized communities and those with disabilities,” says Seyyed-Kalantari, who leads York’s Responsible AI Lab and is co-director of the new Mitigating Dialect Bias solutions network, which received $700,000 in Canadian Institute for Advanced Research funding.

She sees cultural bias arising from misinterpretation of dialects as a major concern in LLMs. “For example, African American Vernacular English often uses grammar, vocabulary and expressions that are not part of Standard American English. AI may interpret such words and phrases as ‘toxic’ and harmful. This is because LLMs are trained on data that favour dominant dialects.”
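The kind of paired-dialect check such benchmarks rely on can be sketched in a few lines. The stand-in “model” below, a naive keyword list, and the sentence pairs are invented for illustration; a real evaluation would query an actual LLM or toxicity classifier:

```python
# Hypothetical paired-dialect bias check. A fair model should give the
# same meaning, expressed in two dialects, roughly equal toxicity
# scores; a consistent gap signals dialect bias of the kind described.

FLAGGED = {"finna", "ain't"}  # naive word list mimicking a biased model

def toxicity_score(sentence):
    # Fraction of words the stand-in "model" flags as toxic.
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    return sum(w in FLAGGED for w in words) / len(words)

# Minimal pairs: same meaning, AAVE form first, Standard American second.
pairs = [
    ("I'm finna head to the store.", "I am about to head to the store."),
    ("She ain't coming today.", "She is not coming today."),
]

gaps = [toxicity_score(aave) - toxicity_score(sae) for aave, sae in pairs]
print(all(g > 0 for g in gaps))  # prints True: only AAVE forms are flagged
```

Swapping in a real classifier for `toxicity_score`, and curated minimal-pair datasets for `pairs`, turns this sketch into the sort of domain-specific benchmark the project describes.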

This is an issue that affects dialects around the world, something Seyyed-Kalantari plans to address next.

Ethical AI
Wed, 28 Jan 2026

There is a well-thumbed novel on Pina D’Agostino’s bookshelf that keeps her up at night, but also propels her forward.

It is Nobel Prize-winning writer Kazuo Ishiguro’s futuristic fiction Klara and the Sun, in which an anxious generation of parents buy humanoid robots to act as “artificial friends” for their lonely kids. The more privileged parents also leverage genetic enhancement technology to make their children smarter.

“That is my nightmare, and I want to make sure we don’t get to that world,” says D’Agostino, who is the scientific director of Connected Minds, the massive York University-led, next-generation research project that seeks to understand and predict the opportunities and risks to society associated with advancing technology.

“That is why Connected Minds and its work is important. We’re trying to get ahead of the development aspects of technology and advance socially responsible technologies to leave the world a better place – one where we are connected to each other in a human way, not disconnected.”

D’Agostino says Connected Minds is more than a research program; it is a movement toward a future where technology and human well-being evolve together in a more socially beneficial way. With Canada providing $105.7 million in funding toward the $318-million transformational project, creating a new national Ministry of Artificial Intelligence and Data Innovation and putting in place a voluntary code of conduct for technological advances, D’Agostino views the country as a world leader in this type of responsible AI. She expects legislation will come next.

A key player with Connected Minds since before its launch, D’Agostino has a background that makes her a natural to lead it. She is a professor at Osgoode Hall Law School and the Tier 1 York Research Chair in Intellectual Property, Artificial Intelligence and Emerging Technologies. She is also founder and director of the IP Innovation Clinic, the country’s largest intellectual property clinic, helping York faculty and researchers as well as the broader community.

“My focus is on how the law and interdisciplinary collaboration can help ensure that no one is left behind.”

In the spring of 2025, D’Agostino was appointed as York’s associate vice-president of research, and in October as the Chair of the Board of the Ontario Centre for Innovation.

Connected Minds is already making an impact, and D’Agostino takes particular pride in its interdisciplinary nature. The program brings together universities and industries, hospitals and policy-makers, artists and Indigenous communities, and is engaging more than 50 community partners and research collaborators over seven years to create a responsible and inclusive approach to technological development.

“When we cross over and we work together, great things happen. There is an array of different projects that we are working on, confronting how technologies interplay with human behaviour and ensuring that the goals and outcomes are going to really change the world in a positive way,” says D’Agostino, recognized by Canadian Lawyer magazine as one of the Top 25 Most Influential Lawyers in 2022.

“My DNA has always been one rooted in, I would say, social justice and fairness, and an evidence-based approach to everything I do. I’ve always been fascinated by technology, and how throughout history new forms bring promise, but also challenge different communities. My focus is on how the law and interdisciplinary collaboration can help ensure that no one is left behind.

“We need to get ahead of these challenges and have appropriate governance frameworks in place to ensure we are strong as a society.”

While she’s got her eye on those big-picture issues, D’Agostino is also concerned about the impact of technology on her own four children, including her triplets.

“Kids and the next generation, that’s what I come back to,” she says. “Being a mother of four little ones, I see a new generation and a new way of seeing technology. They’re early adopters of it. For better or for worse, the kids are inheriting the world we’re leaving behind for them.”

Shaping governance
Wed, 28 Jan 2026

In today's interconnected world, global cooperation is crucial for addressing cross-border challenges. It is evident in progress made on climate change, health crises and economic development, often as a result of international frameworks built on shared resources, common rules and mutual support.

York University researchers are calling for such coordinated global action to govern artificial intelligence (AI). They say establishing effective frameworks to prevent the intensified risks of digital colonization, deepening inequality, intellectual property (IP) violations and workforce exploitation is crucial.

Associate Professor Teshager Dagne and Assistant Professor Muyang Li, both in the Faculty of Liberal Arts & Professional Studies, are leading key research and academic initiatives related to AI governance, supported by significant funding for their groundbreaking research in emerging areas.

Teshager Dagne Photograph by Horst Herget

Dagne, the Ontario Research Chair in Governing Artificial Intelligence in the School of Public Policy & Administration at York’s Markham Campus, has received funding for three collaborative research projects – as co-lead or principal investigator – from the International Development Research Centre, Social Sciences and Humanities Research Council (SSHRC) Partnership Grants and the French Development Agency.

His research explores the role of existing IP laws in shaping AI innovation and highlights how global legal frameworks often fail to account for diverse realities, particularly in the Global South.

“Most of the impact of IP rules on AI relates to whether there have been copyright violations during the training of AI and the development of its datasets, and whether AI-generated content is protected under IP law,” Dagne explains. “In the context of the Global South, it’s far more complex, with issues of fairness, access, ownership and cultural rights adding important layers to the discussion.”

His current research focuses on African startups and innovators who face barriers in accessing open data and navigating IP frameworks. “This leaves African developers vulnerable to exploitation, as their data and innovations are often used without proper compensation or recognition.”

"In the context of the Global South, it’s far more complex, with issues of fairness, access, ownership and cultural rights adding important layers to the discussion."

He adds that current global rules reflect the legal traditions and economic priorities of countries in the Global North. “These laws, focused on property rights, often don’t align with how innovation occurs in the Global South, where knowledge-sharing tends to be informal, non-proprietary and rooted in open, collaborative practices.”

Li, whose expertise lies in the geopolitical dynamics of AI governance, points out that academic research capacity in the Global South is also limited, affecting how AI-related policies and laws are developed.

“AI governance research remains dominated by high-income Western nations and elite institutions,” says Li. She received support through York’s Global Research Excellence Seed Fund and almost $100,000 in SSHRC funding for her project with the University of Texas at Austin – Shaping the Future of AI: Artificial Intelligence Governance in Global Dynamics. “Much of the Global South, including South Asia, Africa and the Middle East, remains underrepresented in knowledge production and global policy discourse, reinforcing deep inequalities in institutional capacity and participation.”

Muyang Li photography by Chris Robinson

Her SSHRC-funded research investigates how geopolitical dynamics influence the development and dissemination of AI regulation and research, mapping which countries and institutions act as decision-making hubs. It also looks at how these structures reproduce, or sometimes challenge, existing hierarchies in the world system.

A faculty associate of the York Centre for Asian Research, Li says AI governance research is heavily concentrated in the United States, China, the European Union and the United Kingdom. These regions lead global discussions due to imbalances in both knowledge production and funding distribution across the world.

“Certain universities and research centres act as global hubs, driving much of the research agenda. This concentration raises concerns about limited perspectives and potential bias,” she says, noting that funding, researchers and publications on AI governance are highly concentrated in a small group of elite institutions.

As AI reshapes societies and economies worldwide, the call from these researchers is clear: the future of AI must be governed by frameworks that are as diverse, inclusive and interconnected as the world it aims to serve.

The post Shaping governance appeared first on Ascend Magazine.

]]>
Redefining consumption /ascend/article/redefining-consumption/ Wed, 28 Jan 2026 14:25:13 +0000 /ascend/?post_type=article&p=702 Marketing Professor Markus Giesler of York University’s Schulich School of Business analyzes how technologies like artificial intelligence (AI) influence consumer behaviour and buying choices, and what it means for free will, whether purchasing a shiny new pair of shoes or the next vacation destination. His work has been published in top-tier academic journals and media […]

The post Redefining consumption appeared first on Ascend Magazine.

]]>
Marketing Professor Markus Giesler of York University’s Schulich School of Business analyzes how technologies like artificial intelligence (AI) influence consumer behaviour and buying choices, and what that means for free will, whether purchasing a shiny new pair of shoes or picking the next vacation destination.

His work has been published in top-tier academic journals and featured in media outlets such as The New York Times and Time Magazine, and his insights have been shared with policy-makers to improve markets. We asked him a few questions about how AI is shaping the consumer experience, for better or worse.

How is AI influencing consumer behaviour, shaping buying choices and changing consumption patterns?

At Schulich, I lead several research projects that look at precisely this question. What we see is that AI transforms consumption in two directions at once. On the one hand, it depersonalizes choice by reducing consumers to streams of probabilities. On the other, it reintroduces personalization by tailoring recommendations and experiences back to us. This back-and-forth doesn’t just change what people buy; it reshapes how they see themselves as consumers – more data-legible, more responsibilized and more entangled with markets.

Is it changing how consumers engage with brands and products, for example through targeted recommendations?

Yes, and our research shows recommendations are just the tip of the iceberg. Consumers increasingly delegate choices to algorithms, often treating them as responsible agents. That changes how trust and loyalty are formed – not only toward brands but also toward the systems that mediate them. At Schulich, we’re documenting how platforms that balance depersonalization and repersonalization – scaling interactions while keeping them personal – are redefining engagement.

How is AI shaping consumer experiences from customer service to interactive shopping?

In our ongoing projects, we observe AI reshaping experiences in ways that are seamless yet paradoxical. AI assistants, chatbots and generative interfaces can enchant shopping by making it interactive and human-like. But they also create new frictions when consumers sense too much automation or a lack of empathy. What emerges, and what we study closely, are “entanglement-centric” experiences – co-created encounters where consumers and AI systems jointly produce meaning and efficiency.

Are there ethical and data privacy concerns with how consumer data is used?

Definitely. Across multiple Schulich studies, we see that AI doesn’t just consume data – it creates new forms of responsibility for consumers. People are asked to manage algorithmic risks, to be “AI-ready consumers.” This moralizes consumption around compliance and fairness, effectively shifting governance onto individuals and marketers. So the ethical challenge is not only about privacy but also about who bears the burden of making AI safe and fair.

What can we expect in the future?

Our work suggests that AI will increasingly be embedded into core institutions like family, health, education and housing. That means consumption itself will be redefined – not just in terms of products and services but also in how people experience intimacy, security and identity through markets. The critical question is whether this embedding of AI will reproduce inequalities or create genuinely human-centred opportunities for consumption.

The post Redefining consumption appeared first on Ascend Magazine.

]]>
Laughing matters /ascend/article/laughing-matters/ Wed, 28 Jan 2026 14:24:43 +0000 /ascend/?post_type=article&p=710 Laugh out loud. Bust a gut. Crack a witticism. Can artificial intelligence (AI) and social robots be programmed to understand the complexities and nuances of human humour and laughter? Will robots of the future understand they should not laugh at a funeral or the difference between slapstick and satire? These are the sorts of questions […]

The post Laughing matters appeared first on Ascend Magazine.

]]>
Laugh out loud. Bust a gut. Crack a witticism. Can artificial intelligence (AI) and social robots be programmed to understand the complexities and nuances of human humour and laughter? Will robots of the future understand they should not laugh at a funeral or the difference between slapstick and satire?

These are the sorts of questions that tickle the mind of York University PhD candidate Hana Holubec in the Science & Technology Studies program in the Faculty of Graduate Studies.

“That’s my fascination. How do you take something so absolutely dynamic, complex and nuanced and render it technological, when technology operates on binaries that require categorizations and taxonomies? It’s this moving target that changes across time and culture. One of my favourite things about laughter is that it defies classification,” says Holubec.

A trainee at the York-led Connected Minds, which explores human-technology interactions and their societal impacts, Holubec researches the development of AI algorithms that imitate humour and laughter in social robots. She does this work under the supervision of an associate professor in the Department of Global Communications and Cultures at Glendon Campus.

“Laughter can engender social cohesion, but what type of laughter is being prioritized in AI or social robotic development and what types of laughter are being ignored or erased? That’s my big interest.”

“There are aspects of humour that are relatively easy to render technologically. Joke generation has a computational quality and can be quite formulaic, especially wordplay or knock-knock jokes.” But that is only one aspect of humour.

One of the major focuses of social robotics research is programming humour beyond puns and basic jokes, and mimicking human laughter, body movements and facial expressions.

“Social robotics and communicative AI research with the use of laughter is intended to make the user feel like they are interacting in a natural way, in a very human-like way,” says Holubec, who is also a comedy writer and an arts-based instructor within the disability community.

There are ethical and moral concerns. As with any AI algorithm, there is a risk that inequities and harmful class, gender and racial stereotypes will be propagated in this fast-growing field.

Holubec’s most recent research – Laboratory Laugh: the production of laughter in the ERICA project – studies how researchers in Japan are incorporating AI and humour in their android robot Erica, which currently titters demurely. But who decides how Erica, or any other social robot, laughs and at what?

“Laughter can engender social cohesion, but what type of laughter is being prioritized in AI or social robotic development and what types of laughter are being ignored or erased? That’s my big interest,” says Holubec.

“Within my project at Connected Minds, what I really want to work on is developing a critical humour and laughter database, which anyone who is working on the development of communicative AI and social robotics could access.”

Currently, programmers and developers are drawing from neuroscience, psychology and linguistics, but not feminist and historical methodologies, literature and critical race theory, which Holubec says is an issue. “There’s a very rich breadth of information within those disciplines on humour and laughter that could help mitigate the socio-cultural effects that come with the computational flattening of laughter into AI and robotics.”

Although not the primary focus of her research, an additional concern is the potential for harm. “If the robot laughed a really big, mirthful laugh when a user was telling a disappointing story, this could have a detrimental effect. Essentially, they would feel laughed at.”

And that is no laughing matter.

The post Laughing matters appeared first on Ascend Magazine.

]]>
Two sides of the same microchip /ascend/article/two-sides-of-the-same-microchip/ Wed, 28 Jan 2026 14:24:35 +0000 /ascend/?post_type=article&p=712 As artificial intelligence (AI) weaves its way into many aspects of people’s lives, often in unknown ways, it also raises the risk of hackers exploiting AI’s vulnerabilities and causing real harm. While that might not seem like a big deal when talking about writing assistance or entertainment, such as the use of GenAI for a […]

The post Two sides of the same microchip appeared first on Ascend Magazine.

]]>
As artificial intelligence (AI) weaves its way into many aspects of people’s lives, often in unknown ways, it also raises the risk of hackers exploiting AI’s vulnerabilities and causing real harm.

While that might not seem like a big deal when talking about writing assistance or entertainment, such as the use of GenAI to depict a building collapse in a Netflix sci-fi series, AI is rapidly becoming integrated into some of the country’s most critical systems – health care, power grids, nuclear power and transportation – and hackers are taking note. AI-enabled cyber threats capitalize on vulnerabilities in AI algorithms.

As director of the Behaviour-Centric Cybersecurity Centre (BCCC) at York University, Associate Professor Arash Habibi Lashkari, Canada Research Chair in Cybersecurity, is developing vulnerability detection technology to protect network systems against cyberattacks.

“By linking scientific innovation, creative outreach and international collaboration, we ensure advances in AI-driven cybersecurity contribute to a safer, more informed and globally connected digital society.”

“We are using artificial intelligence both to secure critical technologies and to ensure AI itself remains trustworthy. Our AI-powered models are applied to connected and autonomous vehicles, smart devices, decentralized finance systems and the cloud, where they learn patterns of normal behaviour and flag anomalies before harm occurs,” says Lashkari of the School of Information Technology, Faculty of Liberal Arts & Professional Studies.

“This means detecting malicious signals that could compromise road safety, identifying data leaks from smart homes and detecting fraudulent blockchain transactions across large financial networks.”

Most people will interact with AI via large language models like ChatGPT and Google Gemini, and GenAI platforms, but these systems are increasingly vulnerable to adversarial attacks, data poisoning and malicious misuse.

“Our work develops methods to harden these models, improve their transparency and ensure they remain resilient when deployed in real-world settings. In this way, we are working on both sides of the challenge – using AI to protect people, while also protecting AI from manipulation.”

As a leading cyber threat intelligence centre, the BCCC team investigates innovative ways to secure digital infrastructure by detecting, analyzing and mitigating these threats through real-world challenges.

"We are using artificial intelligence both to secure critical technologies and to ensure AI itself remains trustworthy."

The work is shared in accessible and innovative ways through the Understanding Cybersecurity Series, a global knowledge mobilization program, through books, blogs, open datasets, analytics platforms, workshops and even the international Cybersecurity Cartoon Award. The initiatives are strengthened through national and international collaborations, including with the National Cybersecurity Consortium, research partnerships with Japan’s National Institute of Information and Communications Technology, academic and industry partners in the United States and ongoing work with research teams in Europe, including Ireland, Germany and Italy.

“By linking scientific innovation, creative outreach and international collaboration, we ensure advances in AI-driven cybersecurity contribute to a safer, more informed and globally connected digital society,” says Lashkari.

The post Two sides of the same microchip appeared first on Ascend Magazine.

]]>
Tools of the trade /ascend/article/tools-of-the-trade/ Wed, 28 Jan 2026 14:24:26 +0000 /ascend/?post_type=article&p=699 Talk to any researcher in the artificial intelligence (AI) space and their excitement for the possibilities of how it could transform many aspects of health care is palpable, and for good reason. They are developing ethical AI tools that can be integrated into clinical elements in ways that could bring us that much closer to […]

The post Tools of the trade appeared first on Ascend Magazine.

]]>
Talk to any researcher in the artificial intelligence (AI) space and their excitement for the possibilities of how it could transform many aspects of health care is palpable, and for good reason. They are developing ethical AI tools that can be integrated into clinical elements in ways that could bring us that much closer to precision medicine, where treatments would be customized to each patient and play a powerful role in improving outcomes.

AI can analyze huge, data-rich medical images and enormous quantities of data much faster than a human – and sometimes better – observing the tiniest details or changes that doctors cannot see but that could necessitate a different course of patient treatment. These AI tools can also provide highly accurate predictive analyses.

When it comes to cancer and liver transplant patients, it could mean the difference between a poor outcome and a higher survival rate. York University researchers Ali Sadeghi-Naini of the Lassonde School of Engineering and Divya Sharma of the Faculty of Science are developing AI tools for specific tasks that in some cases give clinicians information they otherwise would not have, with real-world implications for patients.

An associate professor, Sadeghi-Naini is developing AI tools coupled with imaging for brain, ovarian and breast cancers to characterize, monitor and predict different biological processes.

“We can scan these patients ahead of time using state-of-the-art ultrasound..."

He is the principal investigator of a new research project in collaboration with Women’s College Hospital with funding from the New Frontiers in Research Fund to develop a cost-effective, accessible AI platform to analyze the digital pathology images of ovarian cancer. The goal is to determine whether the patient has a genetic condition called homologous recombination deficiency without performing expensive genomic testing.

“It is an important factor in determining if the patient can benefit from available targeted therapies or not, but currently finding that out requires genomic instability analysis, which is costly and not always accessible,” says Sadeghi-Naini, director of the Quantitative Imaging and Biomarkers Laboratory at York. That project is just beginning.

“I am also leading projects in collaboration with Sunnybrook Health Sciences Centre to develop AI frameworks that analyze digital pathology images of routine biopsy samples to predict treatment outcomes for individual breast cancer patients before they go through chemotherapy, to predict their response to treatment. It shows very promising results.”

Innovation York and Sunnybrook, where Sadeghi-Naini is a cross-appointed scientist, are currently partnering to commercialize some of those tools.

In about 30 per cent of cases, chemotherapy does not shrink tumours effectively, as with some high-risk breast cancers, but currently this is often determined only months later, after chemotherapy is complete. “We can scan these patients ahead of time using state-of-the-art ultrasound to acquire raw signal data that, after signal processing, will generate quantitative ultrasound parametric images.”

The tool can then analyze those images more deeply, faster and in more detail, and predict patient response to chemotherapy before or shortly after it starts. It is important information that would allow oncologists to choose different treatment options if a chemo regimen is predicted not to work, which could significantly improve the survival rate of those patients who do not respond well.

“Studies show that the response of patients to upfront chemotherapy is linked to survival. Good responders show significantly better survival compared to poor responders,” says Sadeghi-Naini.

He is also working on an AI solution to a different problem, this time for brain cancer patients.

Following stereotactic radiotherapy for brain tumours, there is an up to 25 per cent chance a patient will experience radiation necrosis, a complication that can occur months to years later and is difficult for doctors to discern from brain tumour recurrence or progression.

“The problem here is that on the standard anatomical imaging, they appear very similar to each other,” he says. “That’s a challenge because radiation necrosis and tumour progression are two quite different things with different treatment approaches.”

In a recent study involving more than 90 patients with 230 brain tumours, Sadeghi-Naini and the team developed an AI platform that can analyze images of the brain using a new advanced MRI technique. Manually analyzing tumours on this multi-channel MRI is complicated, but with AI-guided methods it is much easier to distinguish between radiation necrosis, tumour progression or tumour recurrence.

He also leads development of an AI system to streamline analysis of repeated MRI scans for each brain cancer patient, a faster process that can better monitor and categorize tumour changes from one scan to another. In addition, he is working on an AI platform that can analyze early imaging of brain tumours and detect features invisible to the human eye, but that can provide information on the long-term outcome of the tumour, which may require a change in treatment.

“These are all cost-effective AI decision support tools for oncologists that inform personalized treatments and streamline their daily workflow, ultimately contributing to better patient care,” says Sadeghi-Naini. His research has garnered funding from the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canadian Institutes of Health Research (CIHR), the National Research Council of Canada, the Terry Fox Foundation and others.

Sadeghi-Naini does not think AI can replace oncologists or radiologists, but says, “it can provide valuable complementary information, improve accessibility to precision therapeutics, save time and resources, and streamline and triage more complicated cases for expert review,” all providing added benefits to patients.

Sharma, an assistant professor and co-principal investigator on two recent CIHR project grants worth close to $3 million, is creating more equitable access to liver transplants through a national framework and developing a multimodal AI tool to improve success rates following liver transplantation.

The goal of the five-year national framework project with the University Health Network (UHN) and others is to understand the roadblocks preventing fair access for all patients on the liver transplant waitlist, create an ethical, data-driven AI model framework to ensure equal access to donor organs going forward, and improve post-transplant outcomes.

Divya Sharma photography by Chris Robinson

“As part of developing an equitable AI-driven framework, we will include diverse voices in its development process. We will also analyze data on liver disease and transplants for all patients, regardless of race, socioeconomic status, sex and gender, to identify and address inequalities,” says Sharma, who leads York’s IMPACT-AI lab and is a scientist at UHN.

“Our AI tools will not only help to predict but also improve long-term health outcomes for transplant patients by reducing disparities.”

Some three million Canadians from all sectors of society are affected by liver disease. Although transplants can be life saving for those with end-stage liver disease, access is not equal and about 5,000 patients die from end-stage liver disease annually.

“Building trust and understanding around the new AI technology is an important piece of our project. By talking with patients, doctors and technology experts about what our AI model will do and how it can improve the process and the outcome, it can help ensure the adoption and clinical success of the framework,” says Sharma, who earned a 2025 Petro-Canada Emerging Innovator Award and a New Frontiers in Research Fund grant to develop genomic data-driven generative AI for pancreatic cancer.

“Ensuring the framework model is ethical from the beginning is key in reversing inequities to liver transplants some patients currently experience across Canada.”

"We will also analyze data on liver disease and transplants for all patients, regardless of race, socioeconomic status, sex and gender, to identify and address inequalities."

However, once a patient receives a liver transplant there is a high potential for serious complications. Up to 25 per cent of recipients will develop graft fibrosis or scarring from immunosuppressant medications or through organ rejection. Sharma’s second five-year project with UHN hopes to address this by developing a multimodal AI tool to predict patients at high risk of graft scarring.

“Our AI tools will not only help to predict but also improve long-term health outcomes for transplant patients by reducing disparities.”

Sharma says they previously used clinical and laboratory data from about 2,000 transplant recipients to develop a mathematical model to diagnose the condition. They will now expand the model’s capabilities using pathology and ultrasound imaging data so that it can also predict the future risk of scarring. The hope is it will lead to earlier diagnosis, and the development of better prevention and treatment strategies to improve outcomes.

Both projects are designed to deliver clinical benefits in hospitals and transplant centres, resulting in improved and more equitable patient care. Sharma is also co-first author on a recent journal paper highlighting the team’s work with GraftIQ, a neural network model designed as a non-invasive diagnostic tool for liver graft injury.

These are some of the ways York researchers are capitalizing on the ability of well-designed, ethical and safe AI tools to provide real health benefits to patients, now and into the future.

The post Tools of the trade appeared first on Ascend Magazine.

]]>
Funding the future /ascend/article/funding-the-future/ Wed, 28 Jan 2026 14:24:07 +0000 /ascend/?post_type=article&p=691 THROUGH THE YORK-LED CONNECTED MINDS: Neural and Machine Systems for a Healthy, Just Society initiative, five teams from York University and Queen’s University received $1.5 million each to tackle everything from artificial intelligence (AI)-driven communication technologies for Canadians with speech impairments to wearable electroencephalogram (EEG) devices to better monitor epilepsy. The new funding further […]

The post Funding the future appeared first on Ascend Magazine.

]]>
THROUGH THE YORK-LED CONNECTED MINDS: Neural and Machine Systems for a Healthy, Just Society initiative, five teams from York University and Queen’s University received $1.5 million each to tackle everything from artificial intelligence (AI)-driven communication technologies for Canadians with speech impairments to wearable electroencephalogram (EEG) devices to better monitor epilepsy. The new funding further underscores York’s commitment to the field of AI. York took the top spot among Canada’s comprehensive universities for artificial intelligence publications in the latest edition of Canada’s Innovation Leaders, by Research Infosource Inc. The inaugural Connected Minds Team Grants, funded through the Canada First Research Excellence Fund, will help develop unbiased AI and creative technology tools that benefit all of society equally.

Creative Collectivities: Rehearsing Equitable Futures Through Participatory Technologies

Led by Professor Laura Levin of York University’s School of Arts, Media, Performance & Dance (AMPD) and Assistant Professor Michael Wheeler of Queen’s University, this team will explore how AI, virtual reality and immersive theatre can reshape social connection and collective behaviour. Collaborating with equity-focused theatre companies and community groups representing Indigenous, 2SLGBTQIA+, racialized and disabled communities, they will co-create experimental platforms centred on diverse voices and expand access to cultural participation.

Laura Levin

Wearable EEG for Personalized Epilepsy Management

Current epilepsy monitoring tools can be uncomfortable, inaccessible and limited when supporting real-time care at home. Led by Associate Professor Hossein Kassiri of York University’s Lassonde School of Engineering and Queen’s University Professor Gavin Winston, this team is developing a smart, wearable EEG headset designed for clinical accuracy, long-term comfort and ethical use in everyday environments. The device integrates AI-powered chips to detect abnormalities and forecast seizures in real time, while accounting for diverse anatomical and hair-type differences.

Hossein Kassiri

Co-creating Intelligent Neuro-technologies for Healthy Aging (CINTHEA)

Older adults often face challenges related to mobility, cognitive health and social isolation. Led by Professor James Elder of York University’s Lassonde School of Engineering and Queen’s University Associate Professor Vincent DePaul, the team is developing AI-powered systems, such as lab-grade mobile assessments and socially assistive robots, to monitor the cognitive, physical and social well-being of older adults while promoting independence and connection.

James Elder

When People Talk, Listen Completely

Canadians with speech impairments face significant barriers to employment, often due to stigma and a lack of accessible workplace supports. A team led by Queen’s University Associate Professor Claire Davies and co-led by York University Associate Professor Shital Desai of AMPD is developing AI-driven communication technologies, educational tools and workplace strategies to improve employment access for Canadians with speech impairments. The team is advancing four interconnected research streams: AI-powered assistive technologies, inclusive workplace design, employer education and long-term strategies for equity in employment.

Shital Desai

The Biskaabiiyaang Indigenous Metaverse: Ethical Virtual Environments Rooted in Indigenous Knowledge

Indigenous communities face ongoing barriers to cultural preservation and digital sovereignty in spaces often shaped by colonial frameworks. This project is led by York University Associate Professor Maya Chacaby of Glendon Campus, a research associate in the Centre for Indigenous Knowledges and Languages, and Associate Professor Rebecca Caines of York’s AMPD, program coordinator of creative technologies at the Markham Campus. It blends Anishinaabe knowledge with immersive technology to create Biskaabiiyaang, an Indigenous-governed virtual learning environment designed to support language revitalization, cultural resurgence and healing. Co-created with Indigenous communities, this project charts a path for ethical innovation that advances Indigenous cultural resurgence, strengthens digital sovereignty and reshapes how technology serves diverse knowledge systems. Chacaby also received Social Sciences and Humanities Research Council funding for this project.

Maya Chacaby, Rebecca Caines

The post Funding the future appeared first on Ascend Magazine.

]]>