Artificial intelligence Archives - News@York /news/tag/artificial-intelligence/ Thu, 09 Apr 2026 20:00:21 +0000 Artificial Intelligence for Public Health Advancement launches at York University /news/2026/04/09/artificial-intelligence-for-public-health-advancement-launches-at-york-university/ Thu, 09 Apr 2026 18:50:44 +0000 /news/?p=23637 Today, the University announced the launch of a new Centre of Excellence – Artificial Intelligence for Public Health Advancement (AIPHA) funded through an Ontario Research Fund – Research Excellence program.

The post Artificial Intelligence for Public Health Advancement launches at York University appeared first on News@York.


The new Centre of Excellence will bring together multiple disciplines across the University to develop and deploy artificial intelligence systems to improve health care

TORONTO, April 9, 2026 – As artificial intelligence (AI) becomes increasingly important, especially in the health-care field, York University continues to play an outsized role. Today, the University announced the launch of a new Centre of Excellence – Artificial Intelligence for Public Health Advancement (AIPHA) funded through an Ontario Research Fund – Research Excellence program.

Pamela Ponic, Director General of Applied Public Health Sciences in the Science and Policy Integration Branch of the Public Health Agency of Canada; Kieran Moore, Chief Medical Officer of Health for the Ontario Ministry of Health; and Lorne Coe, MPP for Whitby and Parliamentary Assistant to the Minister of Colleges, Universities, Research Excellence and Security, all spoke at the lunch-time event.

Interim President and Vice-Chancellor Lisa Philipps

As a key player in supporting health-care decision making, AIPHA will strengthen external partnerships and accelerate the transfer of knowledge from research to policy and practice, where hospitals, medical practitioners, policymakers and leaders can use it. It will also help bridge the gap between health analytics and real-world socioeconomic conditions, further positioning York as a national and global leader in AI-integrated public health solutions through research and innovation.

“The launch of AIPHA marks a defining moment for York University and for the future of public health in Canada. By bringing together expertise across disciplines, from mathematical modelling and AI to health policy and social equity, we are creating something truly transformative: a hub where research doesn't just advance knowledge but directly shapes the decisions that protect and improve people's lives,” says York University Interim President and Vice-Chancellor Lisa Philipps. “York has long been committed to addressing society's most pressing challenges, and AIPHA reflects that mission at its fullest. We are proud to be building the next generation of AI-adept public health leaders right here, and to be positioning Canada as a global force in equitable, evidence-informed health innovation.”

From left, AIPHA Scientific Director Seyed Moghadas, Faculty of Science Dean Maydianne Andrade, AIPHA Director and University Distinguished Research Professor Jianhong Wu

As a dedicated, multi-disciplinary and national hub, AIPHA will bring together expertise from across faculties, including in advanced mathematical and computational modelling, precision analytics and multi-source databases. The goal is to integrate epidemiological, clinical, environmental, climate and socioeconomic indicators in newly created AI models, while training the next generation.

“Artificial intelligence tools are increasingly being used in health-care settings but a coordinated, ethical and equitable approach to ensure the tools use integrated data sources, and that they are being properly tested and deployed for patient good is lacking. The current speed of newly developed AI models is at times outpacing governance,” says Faculty of Science Dean Maydianne Andrade. “As a new Centre of Excellence in the Faculty of Science, AIPHA will lead the way toward better integration of these new technologies in a cohesive manner that will help advance public health care.”

Two projects already underway include: Integrating AI with disease transmission dynamics models for informed prevention and control of outbreaks in indoor and mass gathering settings (2025 to 2031) and Advanced mathematical technologies for respiratory infection risk assessment and pharmaceutical intervention scenario analysis (2024 to 2028), both led by AIPHA’s inaugural director Jianhong Wu.

“The AI for Public Health Research Centre is a coordinated innovation hub that will help improve health-care efficiency and outcomes, as well as ensure coordinated, ethical and equitable transformation of public health systems,” says AIPHA Director and University Distinguished Research Professor Jianhong Wu of the Faculty of Science. “This kind of central hub is much needed in the health-care sector today to ensure emerging AI tools are properly integrated and decision and policy makers are provided with robust information toward developing a more cohesive, ethical and equitable public health-care system.”

Faculty of Science Dean Maydianne Andrade

AIPHA will integrate epidemiological, clinical, environmental and socioeconomic data into the AI-enabled decision-support systems it develops and deploys to guide equitable and evidence-informed public health action. In ensuring the development of fair and equitable AI systems, the hub will combine not only advanced mathematical and computational modelling and AI and predictive analytics, but also health systems and policy research with social determinants of health and equity frameworks.

AIPHA will act as a research accelerator for large collaborative grants, train the next generation of AI-adept public health leaders and develop pilot AI-integrated prototypes for infectious disease modelling, health system resource allocation and climate health risk forecasting.

It will also strengthen Canadian pandemic and emergency preparedness, enhance evidence-based policymaking, support climate-health adaptation strategies, improve health equity outcomes, and increase York’s national visibility in AI governance and public health innovation.

AIPHA Director Jianhong Wu at the launch of the new Centre of Excellence

About York University

York University is a modern, multi-campus, urban university located in Toronto, Ontario. Backed by a diverse group of students, faculty, staff, alumni and partners, we bring a uniquely global perspective to help solve societal challenges, drive positive change, and prepare our students for success. York's fully bilingual Glendon Campus is home to Southern Ontario's Centre of Excellence for French Language and Bilingual Postsecondary Education. York’s campuses in Costa Rica and India offer students exceptional transnational learning opportunities and innovative programs. Together, we can make things right for our communities, our planet, and our future.

Media Contact: Sandra McLean, York University Media Relations, 416-272-6317, sandramc@yorku.ca

Rethinking brain-like artificial intelligence: A new study reveals hidden mismatches /news/2026/03/25/rethinking-brain-like-artificial-intelligence-a-new-study-reveals-hidden-mismatches/ Wed, 25 Mar 2026 17:18:00 +0000 /news/?p=23568 A new study by York University researchers has found a striking potential flaw in artificial intelligence (AI) models.

The post Rethinking brain-like artificial intelligence: A new study reveals hidden mismatches appeared first on News@York.


TORONTO, March 25, 2026 – A new study by York University researchers has found a striking potential flaw in artificial intelligence (AI) models.

Artificial neural networks (ANNs), a type of AI model built to solve vision tasks for computers, have surprisingly emerged as leading candidate models of the primate visual system. But does current AI really work like a primate brain?

“Artificial intelligence systems are often described as ‘brain-like’ because they can predict activity in parts of the brain that help us recognize objects,” says York University Assistant Professor Kohitij Kar, senior author of a new study. “Until now, scientists mostly tested this in one direction. They asked whether AI models can predict brain activity.”

Kohitij Kar

In this study, the researchers flipped the question – if AI truly mirrors the brain, shouldn’t brain activity also be able to predict what’s happening inside the AI model? – and developed a reverse predictivity test to find the answer.
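The reverse predictivity idea can be illustrated with a toy linear version. The sketch below is a hypothetical simplification, not the study's actual method: it stands in for deep-network activations and primate recordings with random matrices, and uses ridge regression in both directions. Because the simulated model features contain a subspace shared with the simulated brain responses plus many "private" dimensions, prediction succeeds in one direction and partly fails in the other.

```python
import numpy as np

def ridge_r2(X, Y, alpha=1.0):
    """Fit closed-form ridge regression X -> Y; return mean R^2 over Y's columns."""
    # W = (X^T X + alpha I)^-1 X^T Y
    W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)
    pred = X @ W
    ss_res = ((Y - pred) ** 2).sum(axis=0)
    ss_tot = ((Y - Y.mean(axis=0)) ** 2).sum(axis=0)
    return float(np.mean(1 - ss_res / ss_tot))

rng = np.random.default_rng(0)
n_images = 200
brain = rng.normal(size=(n_images, 20))       # simulated neural responses (hypothetical)
shared = brain @ rng.normal(size=(20, 20))    # model features driven by the brain...
private = rng.normal(size=(n_images, 80))     # ...plus features the brain does not explain
model = np.concatenate([shared, private], axis=1)

forward = ridge_r2(model, brain)   # model features -> neural responses
reverse = ridge_r2(brain, model)   # neural responses -> model features

print(f"forward (model predicts brain): {forward:.2f}")
print(f"reverse (brain predicts model): {reverse:.2f}")
```

In the study's terms, high forward but much lower reverse predictivity flags internal model features that the brain does not appear to use.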

“Ultimately, we need computational models to truly understand the underlying neural mechanisms of how we recognize objects. How do we see objects move? While it's a very easy task that we do every day, computationally, though, it's a very challenging problem,” says Kar, the Canada Research Chair in Visual Neuroscience and a member of York’s Centre for Vision Research and Centre for Integrative and Applied Neuroscience.

The researchers, including York Postdoctoral Fellow Sabine Muzellec, a Connected Minds trainee, used 1,320 natural or naturalistic synthetic images of a bear, an elephant, a face, an apple, a car, a dog, a chair, a plane, a bird and a zebra, placed against natural, indoor or outdoor background scenes. They also used an additional 300 images of the same objects rendered differently, such as outlines, drawings, schematized forms and artistic variations.

“The results were striking. While AI models can predict the neurons we recorded in the brain fairly well, the brain cannot equally predict many of the model’s internal features. Interestingly, this is not the case when neurons from one brain is compared against ones from another brain,” says Kar.

If ANNs solve vision differently than the brain does, the mismatch between primate brains and models will widen and compound over time if not corrected now. Prediction has so far run in only one direction, with models predicting neural activity; if the reverse does not hold, these models do not really serve as good hypotheses for the brain, adds Kar.

“The findings suggest that today’s AI systems solve visual tasks partly using internal strategies that the brain may not use. Importantly, the parts of AI models that align with the brain are also better at predicting real human behavior,” says Kar.

Why this matters

AI models are increasingly used to help design experiments to understand human behaviour, including in clinical settings. It is assumed AI models see the world similarly to how a human brain does.

“Our findings challenge how similar current AI systems really are with the primate brain. We show that models that were previously thought to be brain-like rely on internal components that the brain does not appear to use. We provide a well vetted diagnostic metric for the field,” says Muzellec.

If AI models can become more brain-like, they could in the future help people with everything from post-traumatic stress disorder to autism, but for now, their use in experiments and to understand human behaviour is fraught. Similar models are also being used now for auditory systems, language systems and motor systems, but again, if they aren’t working as expected that’s an issue.

“Our approach helps identify which parts of an ANN truly match brain activity, allowing us to build more reliable models for understanding how people see and interpret the world,” says Kar. “This is especially important for our autism research program, which builds on models of the neurotypical brain as a baseline.”

The study’s authors have also made a resource available for AI developers to use to both test and improve their models going forward.

The study, published today in Nature Machine Intelligence, introduces a new standard for building AI that is not just powerful – but truly brain-aligned.

Novel AI technique able to distinguish between progressive brain tumours and radiation necrosis, York University study finds /news/2025/12/08/novel-ai-technique-able-to-distinguish-between-progressive-brain-tumours-and-radiation-necrosis-york-u-study-finds/ Mon, 08 Dec 2025 16:00:00 +0000 /news/?p=23273 While targeted radiation can be an effective treatment for brain tumours, subsequent potential necrosis of the treated areas can be hard to distinguish from the tumours on a standard MRI. A new study led by a York University professor in the Lassonde School of Engineering found that a novel AI-based method is better able to distinguish between the two types of lesions on advanced MRI than the human eye alone, a discovery that could help clinicians more accurately identify and treat the issues.

The post Novel AI technique able to distinguish between progressive brain tumours and radiation necrosis, York University study finds appeared first on News@York.


Professor says this could lead to better treatments for late-stage cancer patients

TORONTO, Dec. 8, 2025 — While targeted radiation can be an effective treatment for brain tumours, subsequent potential necrosis of the treated areas can be hard to distinguish from the tumours on a standard MRI. A new study led by a York University professor in the Lassonde School of Engineering found that a novel AI-based method is better able to distinguish between the two types of lesions on advanced MRI than the human eye alone, a discovery that could help clinicians more accurately identify and treat the issues.

York Research Chair and Professor Ali Sadeghi-Naini, senior author of the study.

“The study shows, for the first time, that novel attention-guided AI methods coupled with advanced MRI can differentiate, with high accuracy, between tumour progression and radiation necrosis in patients with brain metastasis treated with stereotactic radiosurgery,” says York Research Chair Ali Sadeghi-Naini, senior author of the paper and associate professor of biomedical engineering and computer science. “Timely differentiation between tumour progression and radiation necrosis after radiotherapy in brain tumours is a crucial challenge in cancer centers, since these two conditions require quite different treatment approaches.”

The proposed AI model architecture. The model processes multi-channel 3D input volumes. Within each block, attention is computed through four mechanisms.

The study, published in the International Journal of Radiation Oncology, Biology, Physics, was conducted in close collaboration with imaging scientists, neuro-oncologists and neuro-radiologists at Sunnybrook Health Sciences Centre using data acquired from more than 90 cancer patients whose original cancer had metastasized to the brain.

Sadeghi-Naini says the incidence of brain metastasis is rising as treatments improve and survival rates increase. Stereotactic radiosurgery (SRS), where concentrated doses of radiation are applied to the cancer lesions only, is effective at controlling the tumours. In up to 30 per cent of cases, however, SRS is not able to control the tumour and it continues to grow. Even where it is successful, healthy brain tissue immediately surrounding the tumour may also die off, a condition called brain radiation necrosis, which can come with significant side effects.

Sadeghi-Naini and his colleagues introduced a 3D deep learning AI model with two advanced attention mechanisms to differentiate between tumour progression and radiation necrosis using a specialized MRI technique called chemical exchange saturation transfer (CEST), and found that the AI was able to differentiate between the two conditions with over 85 per cent accuracy. Sadeghi-Naini says with a standard MRI the two conditions are accurately diagnosed about 60 per cent of the time, and with more advanced MRI techniques alone, the rate increases to about 70 per cent.
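As rough intuition for what an attention mechanism contributes, the toy sketch below shows spatial attention pooling in NumPy: each voxel's features receive a relevance score, the scores are softmax-normalized into weights, and the volume is collapsed into one weighted descriptor a classifier could use. This is illustrative only; the names, shapes and scoring rule are hypothetical, and the paper's actual 3D architecture with its four attention mechanisms is far richer.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(features, w_score):
    """Score each voxel, normalize scores into attention weights,
    and return the attention-weighted average feature vector.
    features: (n_voxels, n_channels); w_score: (n_channels,)"""
    scores = features @ w_score   # one relevance score per voxel
    weights = softmax(scores)     # non-negative, sums to 1 over voxels
    return weights @ features     # (n_channels,) pooled descriptor

rng = np.random.default_rng(1)
voxels = rng.normal(size=(512, 16))  # toy flattened MRI feature map (hypothetical)
w = rng.normal(size=16)              # toy scoring parameters (hypothetical)
pooled = attention_pool(voxels, w)
print(pooled.shape)  # (16,)
```

A small classifier head on `pooled` would then output a tumour-progression versus necrosis probability; because the attention weights sum to one, the pooled descriptor always stays within the range of the observed voxel features.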

“Differentiating tumour progression and radiation necrosis is very important — one needs more anti-cancer therapies and may need to be aggressively treated with more radiation, sometimes surgery. The other may require observation, anti-inflammatory drugs, so getting this right is crucial for patients.”

-30-

York University is a modern, multi-campus, urban university located in Toronto, Ontario. Backed by a diverse group of students, faculty, staff, alumni and partners, we bring a uniquely global perspective to help solve societal challenges, drive positive change, and prepare our students for success. York's fully bilingual Glendon Campus is home to Southern Ontario's Centre of Excellence for French Language and Bilingual Postsecondary Education. York’s campuses in Costa Rica and India offer students exceptional transnational learning opportunities and innovative programs. Together, we can make things right for our communities, our planet, and our future.

Media Contact:

Emina Gamulin, York University Media Relations, 437-217-6362, egamulin@yorku.ca

York University and Georgina expand their collaboration /news/2025/11/05/york-u-and-georgina-expand-their-collaboration/ Wed, 05 Nov 2025 21:15:46 +0000 /news/?p=23100 York University and the Town of Georgina signed a Memorandum of Understanding (MOU) today to formalize their long-standing collaboration in key areas of mutual interest.

The post York University and Georgina expand their collaboration appeared first on News@York.

Rhonda Lenton, York University President and Vice-Chancellor, and Ryan Cronsberry, Chief Administrative Officer, Town of Georgina

GEORGINA, Nov. 5, 2025 – York University and the Town of Georgina signed a Memorandum of Understanding (MOU) today to formalize their long-standing collaboration in key areas of mutual interest.

The five-year MOU outlines areas where Georgina and York can collaborate to benefit both the community and the University and further develop their shared interest in primary care, local economic development, entrepreneurship initiatives and knowledge mobilization.

One of the goals is to expand the current relationship, which began even before the 2022 opening of the YSpace Georgina Business Incubator/Accelerator Hub, part of YSpace, York’s pan-university entrepreneurship and innovation hub. YSpace offers support for local businesses, including opportunities for learning and collaboration.

“York University is proud to deepen our partnership with the Town of Georgina through this new Memorandum of Understanding,” says Rhonda Lenton, president and vice-chancellor of York University. “By working together, we are expanding opportunities for experiential learning, supporting local innovation, and driving positive change in our communities. This collaboration reflects our shared commitment to advancing primary care, economic development, entrepreneurship and prosperity, while empowering our students and partners to make a meaningful impact – locally and beyond.”

Town of Georgina Mayor Margaret Quirk

The MOU will encourage alignment and open avenues for experiential learning opportunities for York students that will also benefit Georgina, along with professional development and training opportunities, involvement with capstone projects and community education.

“This MOU reflects the strong and growing relationship between the Town of Georgina and 첥Ƶ,” said Mayor Margaret Quirk. “Through initiatives like YSpace Georgina, we’ve seen firsthand how collaboration can drive innovation, support local businesses, and create meaningful opportunities for our residents. We look forward to continuing this work together to build a more vibrant and resilient community.”

The University and the Town also share an interest in supporting projects and initiatives involving research and artificial intelligence that will foster benefits for both the University and Georgina. The MOU will solidify the relationship between them, which has already delivered benefits to both communities.

About Town of Georgina

Georgina is a lakeshore town in one of Canada’s fastest-growing economic regions, York Region, where residents, organizations and businesses work collaboratively with the municipality to create a well-connected and diversified economy poised for growth. Georgina is a community of communities, with each area having a unique and historical identity, all united and proud to collectively call Georgina home. With 52 kilometres of Lake Simcoe shoreline and only an hour from Toronto, Georgina offers a balanced rural and urban lifestyle, making it a desired location to live, work and play.

About York University

York University is a modern, multi-campus, urban university located in Toronto, Ontario. Backed by a diverse group of students, faculty, staff, alumni and partners, we bring a uniquely global perspective to help solve societal challenges, drive positive change and prepare our students for success. York's fully bilingual Glendon Campus is home to Southern Ontario's Centre of Excellence for French Language and Bilingual Postsecondary Education. York’s campuses in Costa Rica and India offer students exceptional transnational learning opportunities and innovative programs, while at the Markham Campus, innovation, technology, entrepreneurship, and industry collaboration are built into every program. York’s new School of Medicine, the first Canadian medical school to focus on community-based primary health-care education, will welcome its first cohort in September 2028.

Media Contacts:

Tanya Thompson, Town of Georgina, 905-476-4301 Ext. 2446, tathompson@georgina.ca

Sandra McLean, York University Media Relations, 416-272-6317, sandramc@yorku.ca

New research finds specific learning strategies can enhance AI model effectiveness in hospitals /news/2025/06/04/new-research-finds-specific-learning-strategies-can-enhance-ai-model-effectiveness-in-hospitals/ Wed, 04 Jun 2025 15:18:56 +0000 /news/?p=22300 If data used to train artificial intelligence models for medical applications, such as those in hospitals across the Greater Toronto Area, differs from real-world data, it could lead to patient harm. A new study

The post New research finds specific learning strategies can enhance AI model effectiveness in hospitals appeared first on News@York.


TORONTO, June 4, 2025 – If data used to train artificial intelligence models for medical applications, such as those in hospitals across the Greater Toronto Area, differs from real-world data, it could lead to patient harm. A new study out today from York University found proactive, continual and transfer learning strategies for AI models to be key in mitigating data shifts and subsequent harms.

To determine the effect of data shifts, the team built and evaluated an early warning system to predict the risk of in-hospital patient mortality and enhance the triaging of patients at seven large hospitals in the Greater Toronto Area.

The study used GEMINI, Canada’s largest hospital data-sharing network, to assess the impact of data shifts and biases across clinical diagnoses, demographics, sex, age, hospital type, where patients were transferred from, such as an acute care institution or nursing home, and time of admittance. It covered 143,049 patient encounters, drawing on features such as lab results, transfusions, imaging reports and administrative data.

Elham Dolatabadi

“As the use of AI in hospitals increases to predict anything from mortality and length of stay to sepsis and the occurrence of disease diagnoses, there is a greater need to ensure they work as predicted and don’t cause harm,” says senior author Elham Dolatabadi, assistant professor in York’s School of Health Policy and Management, Faculty of Health, a member of Connected Minds and a faculty affiliate at the Vector Institute.

“Building reliable and robust machine learning models, however, has proven difficult as data changes over time creating system unreliability.”

The data used to train clinical AI models for hospitals and other health-care settings need to accurately reflect the variability of patients, diseases and medical practices, she adds. Without that, the model could produce irrelevant or harmful predictions, and even inaccurate diagnoses. Differences in patient subpopulations, staffing and resources, as well as unforeseen changes to policy or behaviour, differing health-care practices between hospitals or an unexpected pandemic, can all cause these data shifts.

“We found significant shifts in data between model training and real-life applications, including changes in demographics, hospital types, admission sources, and critical laboratory assays,” says first author Vallijah Subasri, AI scientist at University Health Network. “We also found harmful data shifts when models trained on community hospital patient visits were transferred to academic hospitals, but not the reverse.”

To mitigate these potentially harmful data shifts, the researchers used transfer learning strategies, which allow a model to store knowledge gained from one domain and apply it to a different but related domain, and continual learning strategies, in which the AI model is updated on a continual stream of data in a sequential manner in response to drift-triggered alarms.

Although machine learning models usually remain locked once approved for use, the researchers found that models specific to hospital type, which leverage transfer learning, performed better than models that use all available hospitals.

Using drift-triggered continual learning helped prevent harmful data shifts due to the COVID-19 pandemic and improved model performance over time.
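A drift-triggered alarm of the kind described can be sketched as a simple distribution-monitoring loop. The example below uses the Population Stability Index (PSI), a common drift statistic chosen here purely as an illustrative stand-in (the article does not specify which statistic the study's monitoring pipeline used), and all data, names and thresholds are hypothetical.

```python
import numpy as np

def psi(reference, current, bins=10):
    """Population Stability Index between two 1-D samples: a common
    drift statistic (rule of thumb: > 0.2 signals meaningful shift)."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # catch values outside the reference range
    ref_frac = np.histogram(reference, edges)[0] / len(reference) + 1e-6
    cur_frac = np.histogram(current, edges)[0] / len(current) + 1e-6
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

rng = np.random.default_rng(42)
train_lab_values = rng.normal(0.0, 1.0, 5000)   # feature distribution at training time
stable_window = rng.normal(0.0, 1.0, 1000)      # new data, same distribution
shifted_window = rng.normal(1.5, 1.3, 1000)     # e.g. pandemic-era case mix

for name, window in [("stable", stable_window), ("shifted", shifted_window)]:
    score = psi(train_lab_values, window)
    action = "update model on recent data" if score > 0.2 else "keep serving current model"
    print(f"{name}: PSI={score:.3f} -> {action}")
```

In a deployment like the one described, crossing the threshold would trigger the continual-learning update rather than leaving the model locked on stale data.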

Depending on the data it was trained on, the AI model could also have a propensity for certain biases leading to unfair or discriminatory outcomes for some patient groups. 

“We demonstrate how to detect these data shifts, assess whether they negatively impact AI model performance, and propose strategies to mitigate their effects. We show there is a practical pathway from promise to practice, bridging the gap between the potential of AI in health and the realities of deploying and sustaining it in real-world clinical environments,” says Dolatabadi.

The study is a crucial step towards the deployment of clinical AI models as it provides strategies and workflows to ensure the safety and efficacy of these models in real-world settings.

“These findings indicate that a proactive, label-agnostic monitoring pipeline incorporating transfer and continual learning can detect and mitigate harmful data shifts in Toronto’s general internal medicine population, ensuring robust and equitable clinical AI deployment,” says Subasri.

The paper was published today in the journal JAMA Network Open.

About York University

York University is a modern, multi-campus, urban university located in Toronto, Ontario. Backed by a diverse group of students, faculty, staff, alumni and partners, we bring a uniquely global perspective to help solve societal challenges, drive positive change, and prepare our students for success. York's fully bilingual Glendon Campus is home to Southern Ontario's Centre of Excellence for French Language and Bilingual Postsecondary Education. York’s campuses in Costa Rica and India offer students exceptional transnational learning opportunities and innovative programs. Together, we can make things right for our communities, our planet, and our future.

Media Contact: Sandra McLean, York University Media Relations, 416-272-6317, sandramc@yorku.ca

Experimenting with generative AI to kibbitz and futz towards more inclusive futures /news/2025/06/02/experimenting-with-generative-ai-to-kibbitz-and-futz-towards-more-inclusive-futures/ Mon, 02 Jun 2025 19:07:34 +0000 /news/?p=22366 What does it mean to think, act and work as a Jewish professor when human freedoms are under siege and authoritarian power gains ground? And how can we draw on our Jewish identities to navigate the sweeping encroachment of new technologies like AI? As communication scholars, colleagues and collaborators, we have spent a lot of […]

The post Experimenting with generative AI to kibbitz and futz towards more inclusive futures appeared first on News@York.


What does it mean to think, act and work as a Jewish professor when human freedoms are under siege and authoritarian power gains ground? And how can we draw on our Jewish identities to navigate the sweeping encroachment of new technologies like AI?

As communication scholars, colleagues and collaborators, we have spent a lot of time trying to answer these questions in our work, taking cues from the traditions of our ancestors.

Lately, Donald Trump’s administration has demonstrated a heavy investment in cataloguing and categorizing Jewish professors. In April, the Equal Employment Opportunity Commission (EEOC) sent text messages to the personal cellphones of faculty and staff at Barnard College, asking them to self-identify as Jewish and/or Israeli. The text message also asked them to disclose any instances of antisemitic discrimination or harassment they had experienced.

Presumably, the text message inquiry itself was not recognized by its senders as an instance of such harassment.

We do not believe being a Jewish professor means silencing our students as they protest atrocities in Gaza. Rather, it means drawing upon the tools of our forebears to question systems of oppression, wherever and however they may arise.

We simultaneously occupy privileged and precarious positions within the university and North American society at large. This makes us acutely aware of how fragile conditional tolerance is, and how quickly it can give way to repression or violence.

Collection and use of data

As communication and media scholars, we’re often critical of how data about people is collected and used. The EEOC questionnaire concerns us because it reduces the complexities of Jewish identity and the profound harms of antisemitism to a handful of abstract and ideologically determined data points.

Our recent research on generative AI (genAI) and its incompatibility with Jewish cultural expression shows that meaningful efforts to combat antisemitism — and other forms of oppression — must centre the knowledge and experiences of affected communities.

Our research found that chatbots such as ChatGPT cannot produce output in a Jewish comedic style without resorting to offensive tropes. In another forthcoming study, we argue that genAI is equally incapable of representing the multifaceted identities of Jewish people except by smashing together rudimentary cultural signifiers (such as rainbows for queerness or bagels for Jewishness).

In each case, these platforms rely on datasets to determine what Jewishness is, and these datasets originate from the narratives that other people tell about Jewish people, rather than the ones we tell about ourselves.

Critical strategies

These platforms have increasingly become parts of daily life and communicative infrastructure. To investigate them, we adopted two critical strategies from our shared heritage as Ashkenazi Jews: kibbitzing and futzing.

Both terms are Yiddish. Kibbitzing is a lively, informal way of thinking and talking together. It’s somewhere between joking, arguing and exchanging ideas. It is grounded in our relationships, histories and biases; kibbitzing is how we make shared meaning together through many voices.

Kibbitzing values contradiction, humour and the messiness of human conversation. Unlike AI chatbots, which follow scripted dialogic turns based on patterns in their training data, kibbitzing is improvisational and relational.

When we kibbitz, we build understanding by challenging one another and reflecting on what each of us brings to the table. In the age of genAI, kibbitzing offers a way to talk that makes room for contradiction, laughter and deep, collective insight.

Futzing means messing around via hands-on experimentation, with no set agenda and no official guidance. This unstructured inquiry is an acknowledgement of Jews’ long history of improvising under constraint. As we write in our forthcoming article, these practices reflect what social theorist Michel de Certeau describes as a tactical means of collective empowerment in a hostile society.

Using futzing as a methodology, we started exploring genAI, drawing on our curiosity to see what might happen by playing, testing and responding in real time.

Futz first, then kibbitz

Each of us futzed on our own at first, with no ambition to crack the code or reverse-engineer the algorithm. Later, when we began kibbitzing together, we realized our scattered efforts were actually circling around shared concerns. Futzing helped us see patterns, surprises and contradictions — things we might have missed with a more rigid approach. Kibbitzing helped us connect those patterns and reconcile the contradictions.

Drawing on our culture this way allows us to imagine inclusive, anti-oppressive Jewish epistemologies that respond to the complexity of the current political moment. Jewish identity — like Yiddish itself — is porous and resistant to fixed form. Our shared North American identity is just one of many possible perspectives that comprise a broader identity of Jewishness.

That is not a problem to be solved. Rather, it is a strength and a bond between us. Readers may well see their own cultural traditions, vernaculars and ancestral practices in this light too, as techniques of resilience and joy in the face of hardship and oppression.

There is an irony here. The deeper we dig into the intellectual roots of our own culture, the more common ground we might discover with everyone else’s. And that makes us feel a whole lot safer than getting a text from the EEOC ever could.

By Assistant Professor , Communication and Media Studies, York University; Professor of Communication Studies , American University School of Communication; and Assistant Professor , Communications, Felician University

The post Experimenting with generative AI to kibbitz and futz towards more inclusive futures appeared first on News@York.

Canada is lagging in innovation, and that’s a problem for funding the programs we care about /news/2025/04/15/canada-is-lagging-in-innovation-and-thats-a-problem-for-funding-the-programs-we-care-about/ Tue, 15 Apr 2025 18:29:11 +0000 /news/?p=22050

The post Canada is lagging in innovation, and that’s a problem for funding the programs we care about appeared first on News@York.


As Canadians prepare to vote in another federal election, the country’s economy faces a sobering reality. As the Organization for Economic Co-operation and Development (OECD) notes, productivity is stagnating, our innovation performance lags global peers and high-potential startups often fail to scale.

Despite these warning signs, innovation policy remains largely absent from political discourse. Canadians hear a great deal about how political parties are going to spend money, but far less about how that money will be generated.

This is a critical oversight. Canada’s enduring productivity gap is eroding the country’s capacity to fund the social programs, such as health care and education, that Canadians value.

If Canadians want to maintain their standard of living, Canada must close that gap through a more deliberate, strategic approach to innovation.

Innovation is economic strategy

In today’s knowledge-based economy, as business executives have argued, power flows to countries that own digital data and their “value-added applications” (like apps or platforms) and intellectual property.

Several countries have embedded innovation into national strategy, investing in sectors like artificial intelligence (AI), clean technology and biotech to drive growth and resilience. Canada, by contrast, has taken a fragmented, reactive approach.

Canada’s over-reliance on research and development (R&D) spending and patent counts has failed to translate into commercial success. According to the OECD, these inputs have not produced outcomes such as productivity growth and technology adoption.

Canada also often conflates research with innovation. While both are vital, innovation is about turning knowledge into use through deployment, adoption, commercialization and scaling. Much of today’s transformative innovation, particularly in AI and software, depends on tacit knowledge (related to things like user insights, execution experience and expertise in a particular domain), not just codified knowledge (for example, patents, technical drawings and licenses).

Why innovation policy fails

Governments struggle with innovation because it defies conventional policymaking:

  • It requires failure tolerance. Innovation is iterative. But political systems fear failure.
  • It demands long-term vision. Results may take years, beyond typical electoral cycles.
  • It’s technically complex. Few policymakers have deep expertise in emerging technologies or understand the research and development process.
  • It’s often misunderstood. Funding research is not the same as building innovation capacity or developing innovation processes.
  • It’s hard to quantify. Innovation outcomes are complex and difficult to measure, which also makes returns hard to demonstrate.

As economist and innovation policy expert Mariana Mazzucato argued in The Entrepreneurial State: Debunking Public vs. Private Sector Myths, successful innovation requires patient, mission-oriented public investment and a willingness to learn from failure. Canada’s current model lacks these ingredients.

Breaking the cycle of failure

To break this cycle, Canada needs a non-partisan national innovation institution — an agency empowered to advise on strategy, evaluate outcomes and embed technical expertise into policy at the federal, provincial and municipal levels.

Models from the U.S., Sweden and elsewhere show how long-term, high-impact innovation can be achieved with the right institutional scaffolding and appropriate knowledge.

Canadians have created a number of innovation organizations with national implications, including one that closed in 2019.

Yet none has had the broad mandate proposed here: to explicitly advise governments on technology and policy strategy, evaluate innovation outcomes and embed technical expertise into recommendations.

A non-partisan national innovation institution must:

  1. Track outcomes more than inputs. Innovation success can be measured by a number of project- or industry-specific outcomes, such as productivity, firm growth and export revenue, benchmarked by comparing innovation performance to peer jurisdictions.
  2. Support long-term strategic objectives, focusing on Canada’s strengths in critical areas like AI, clean technology, energy and health-care technology, and leveraging expertise and experience in these and other areas.
  3. Embed technology experts alongside health-care and education experts in the decision-making process. Recruit scientists, engineers and entrepreneurs to anticipate technology and market trends, guiding both implementation and policy development.
  4. Differentiate innovation from research. Support both, but recognize the differences and explicitly link innovation to adoption and new use cases.
  5. Promote value capture. Ensure Canadian firms and the country benefit from and retain control of key technologies that enable them to scale domestically.
  6. Recognize the inherent risks in innovation and the potential for failure. Evaluate and build on impact and learn from failure to enhance innovation processes and improve future outcomes.
  7. Align our educational institutions with innovation goals by revising programs and creating more flexible learning options so that more research outcomes reach practical use.
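The first recommendation, tracking outcomes against peer jurisdictions, can be sketched in a few lines. This is a toy illustration only: the metric names, numbers and simple peer-mean scaling are invented for the example, not an official methodology.

```python
# Hypothetical sketch of outcome-based benchmarking: score a jurisdiction's
# innovation outcomes relative to peer jurisdictions (1.0 = parity).

def outcome_index(target, peers):
    """Average of the target's metrics, each scaled by the peer mean."""
    ratios = []
    for metric, value in target.items():
        peer_mean = sum(p[metric] for p in peers) / len(peers)
        ratios.append(value / peer_mean)
    return sum(ratios) / len(ratios)

# Invented figures for illustration, not real statistics:
canada = {"productivity_growth": 0.8, "firm_scaleups": 40, "export_revenue": 90}
peers = [
    {"productivity_growth": 1.2, "firm_scaleups": 50, "export_revenue": 100},
    {"productivity_growth": 1.0, "firm_scaleups": 60, "export_revenue": 110},
]
print(round(outcome_index(canada, peers), 2))  # 0.77 -> below peer parity
```

A score below 1.0 flags a lagging jurisdiction on outcomes rather than on inputs such as R&D spending, which is the distinction the list above draws.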

These steps aren’t hypothetical. They’re backed by evidence from jurisdictions that have already taken them.

Why now?

Canada’s economy is stagnating and vulnerable to technological disruption. Meanwhile, the global AI and clean tech races are accelerating. Canada is at risk of falling further behind — not just economically, but geopolitically.

But Canada also has strengths: world-class researchers, diverse entrepreneurial talent and global partnerships. What’s missing is a cohesive national strategy to harness this potential. Creating a non-partisan innovation institution would be a powerful first step.

If Canadians want to generate the revenue governments need to fund education, health care and climate adaptation, they must grow their economy. And to do that, Canada needs smarter innovation policy.

It’s time to stop celebrating activity and start rewarding outcomes. Let’s build the structures that allow Canadian ingenuity to thrive — not in theory, but in practice.

By Bergeron Chair in Technology Entrepreneurship Andrew Maxwell, Lassonde School of Engineering, York University

AI could help solve questions about health of sleeping on back after 28-weeks pregnancy /news/2024/12/06/ai-could-help-solve-questions-about-health-of-sleeping-on-back-after-28-weeks-pregnancy/ Fri, 06 Dec 2024 14:37:00 +0000 /news/?p=21394

The post AI could help solve questions about health of sleeping on back after 28-weeks pregnancy appeared first on News@York.


The importance of sleep for improving brain function, mood, cardiovascular health, metabolic health and staving off dementia has emerged as a hot topic, but for pregnant women, sleep position may also prove critical for giving birth to a healthy child.

Although there is a known association between back sleeping after 28 weeks of pregnancy and low-weight infants and stillbirth, causal proof is difficult to ascertain, and some experts doubt the connection altogether. However, York University’s Elham Dolatabadi and her team have developed a computer vision tool that can identify various sleep postures during pregnancy and potentially lead to more definitive answers in the future.

An assistant professor in York’s School of Health Policy and Management, Faculty of Health, and a member of York’s Centre for AI and Society and Connected Minds, Dolatabadi is interested in developing artificial intelligence and machine learning solutions to health issues. She spent much of her early years developing sensing technologies, also called ambient intelligence, and was in search of a real-world issue when her former student Allan Kember of the University of Toronto, now an obstetrician/gynecologist at Mount Sinai Hospital, suggested maternal sleeping posture.

“These types of technologies that I’ve worked on can be embedded in the environment and the maternal sleeping posture and poor infant outcomes is a modifiable risk factor,” she says.

Along with the Vector Institute, where she was formerly an applied scientist and health lead, Dolatabadi and team have come out with the second version of an AI tool designed to monitor the sleeping positions of pregnant women throughout the night.

“We've developed a vision-based tool that automatically detects the sleep postures of pregnant women using video recordings. This tool is part of a larger initiative aimed at creating unobtrusive and affordable health sensing technologies,” says Dolatabadi. “What’s remarkable is that the tool can accurately detect common sleep postures, such as side and back sleeping, even in real-world settings with blankets, pillows, and more than one person in bed.”

The research team used video from an observational, four-night, home sleep apnea study with 15 pregnant participants and 13 bed partners, along with controlled-setting video recordings from a previous in-home simulation study in which 26 participants simulated a series of 12 pre-defined body postures. Pregnant participants were between 28 and 40 weeks’ gestation from across Canada.

The data was combined and used to train and test the tool, which was able to detect 13 pre-defined sleeping positions, including sitting. Although it performed better on some postures than others, it learned to distinguish the anatomy and physiology of a pregnant person from their non-pregnant partner – or, in one case, from the pets that were also in the bed – as well as pelvis position, says Dolatabadi, adding these were some of the complexities involved in training the AI tool.

An example of how the study's AI tool captured the postures of a sleeping pregnant woman and her partner

Sleep positions included supine pelvis with left or right thorax tilts, supine thorax with right or left pelvic tilts, and prone. It was also able to detect pillows and blankets.
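The tilt-based categories above can be made concrete with a deliberately simplified sketch. Everything here is an assumption for illustration: the function name, the keypoint format and the threshold are invented, and the actual York tool classifies postures from raw video rather than from clean keypoints.

```python
# Hypothetical illustration only, not the York University tool.
# Assumes each keypoint is (x, z), where z is the estimated distance from an
# overhead camera, so a rolled-over sleeper shows a depth gap between their
# left and right shoulders and hips.

def classify_posture(left_shoulder, right_shoulder, left_hip, right_hip,
                     level_threshold=0.1):
    """Return a coarse posture label from four (x, z) keypoints."""
    shoulder_dz = left_shoulder[1] - right_shoulder[1]
    hip_dz = left_hip[1] - right_hip[1]
    tilt = (shoulder_dz + hip_dz) / 2.0
    if abs(tilt) < level_threshold:
        # Shoulders and hips roughly level with the camera: lying on the back.
        return "supine"
    # A large depth gap means the body is rolled onto one side.
    return "left side" if tilt > 0 else "right side"

# A sleeper lying flat shows near-equal depths on both sides:
print(classify_posture((0.40, 0.50), (0.60, 0.51),
                       (0.45, 0.50), (0.55, 0.51)))  # supine
```

Comparing shoulder tilt and pelvis tilt separately is also what would let a model of this general shape distinguish mixed postures such as a supine pelvis with a tilted thorax.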

In real-world applications, the tool will allow for the study of pregnant women in a natural setting, providing more plentiful, higher-quality and more accurate information than is currently available, as it won’t rely on self-reports and someone’s memory of their sleeping position.

As Dolatabadi says, “This technology has the potential to answer how sleep positions may affect pregnancy outcomes, including factors like baby size and stillbirth risk, and solve the controversy and uncertainty about the association between the supine sleeping posture and outcomes.”

In the future, the team plans to refine the tool and make it multi-modal. This would involve training it to monitor and predict heart rate and other vitals using just a camera or phone. It is possible to predict things like heart rate using vision tools without the need for more invasive monitoring sensors, she says.

This would not only provide more robust data to assess the risk of sleep position but also to predict the risk of sleep apnea without people having to go to a sleep clinic.
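Camera-only vital-sign estimation of this kind is often done with remote photoplethysmography (rPPG): the blood-volume pulse faintly modulates skin colour from frame to frame. The sketch below is a toy version under stated assumptions, not the team’s method; real systems must handle motion, lighting changes and skin-tone bias, typically with filtering and spectral analysis rather than this simple crossing count.

```python
# Toy rPPG sketch (an illustrative assumption, not the team's model).
# Input: the mean green-channel intensity of a skin region in each video
# frame; the pulse appears as a tiny oscillation around a baseline.

def estimate_heart_rate(green_means, fps):
    """Rough beats-per-minute estimate from per-frame mean green intensity."""
    n = len(green_means)
    baseline = sum(green_means) / n          # remove the constant skin tone
    residual = [g - baseline for g in green_means]
    # Each cardiac cycle produces one rising zero-crossing of the residual.
    beats = sum(1 for a, b in zip(residual, residual[1:]) if a < 0 <= b)
    return 60.0 * beats * fps / n            # cycles per frame -> per minute
```

Fed a synthetic 1.2 Hz pulse sampled at 30 frames per second, this returns roughly 72 beats per minute; on real video, the hard part is producing a clean `green_means` signal in the first place.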

“The more we can infuse into the model, the better outcome that we can get,” says Dolatabadi.

Going forward, Dolatabadi is interested in further evaluating generative AI to ensure models are not propagating or creating inequities in health-care environments.

“It’s important to evaluate health AI applications as they are susceptible to biases and harm, whether it’s medical diagnostics or administrative billing for hospital services. There are models of this nature that have been integrated into health-care workflow in U.S. hospitals and it's been shown that they have created some unintended inequities for some patient groups,” says Dolatabadi.

“Embedded biases in health-care practices, reflected in the data used for model training, algorithmic decision-making, and the extent to which social determinants of health are considered in model design, can lead to unintended harm and exacerbate existing disparities. As health-care settings increasingly adopt these powerful AI models, now is the time to ensure they are responsible.”

The paper was published in Scientific Reports.

York U program helps fund 16 Global South health-care hubs to combat infectious diseases /news/2023/09/12/york-u-program-helps-fund-16-global-south-health-care-hubs-to-combat-infectious-diseases/ Tue, 12 Sep 2023 13:21:23 +0000 /news/?p=18102

The post York U program helps fund 16 Global South health-care hubs to combat infectious diseases appeared first on News@York.


York’s call for proposals receives overwhelming response from Africa, Asia and Latin America to create AI solutions to fight new and re-emerging infectious diseases

TORONTO, Sept. 12, 2023 – A York University-led program is helping bolster health care in the Global South by providing more than $5.8 million in funding for 16 projects in as many countries, including polio surveillance in Ethiopia and helping Indigenous communities in the Philippines.

“We have led the call to strengthen the health-care system in low- and medium-income countries (LMIC) in the Global South for more than a year now,” says York University Assistant Professor Jude Kong, executive director of the AI4PEP network. Originally from Cameroon, Kong understands the strains faced by health-care systems in LMIC and the importance of southern-led solutions.

“Funding these projects will help strengthen capacity and support prevention, early detection, preparedness, mitigation and control of emerging or re-emerging infectious disease outbreaks in LMIC countries in Africa, South Asia, Southeast Asia, Latin America, the Caribbean and the Middle East, which, as we know, can make their way to every country in the world.”

Map shows locations of 16 hubs receiving funding from AI4PEP

Incidents of disease outbreaks are expected to increase in severity and frequency as more viruses, bacteria and parasites jump from animals to people.

AI4PEP received $7.25 million in funding from the International Development Research Centre in 2022 to develop a multi-regional, interdisciplinary network to use AI and big data to improve public health preparedness and response, and promote equitable and ethical solutions.

After a recent call for proposals, the team received 221 submissions from 47 countries, with 142 of them from Africa, 40 from Asia and 26 from Latin America. The overall program framework centres around a gender, equity, inclusion and decolonization lens.

“AI4PEP at York University is deepening the understanding of how equitable and responsibly designed artificial intelligence can lead to southern-led solutions to strengthen public health-care systems in the Global South,” says Vinitha Gengatharan, assistant vice-president, global engagement and partnerships. “This is just the start of a growing, multi-regional network to improve and strengthen public health preparedness and response to disease outbreaks that can make a real difference in the lives of people.”

The projects are led by universities in collaboration with health-care system stakeholders in 16 regions of the Global South. They include AI and modelling for community-based detection of zoonotic disease amid increasing climate change in Senegal; a Foundation for Medical Research-University of Mumbai project; an AI-powered early detection system for communicable respiratory diseases based on integrated data sets at Wits University in South Africa; an Al-Quds University project; and AI and eco-epidemiology-based early warning systems to improve public health response to mosquito-borne viruses in the Dominican Republic.

Jude Kong

“The funding for our project, named AutoAI-Pandemics, will enable the development of a cutting-edge and user-friendly platform, driven by responsible artificial intelligence practices, to deal with the challenges of infectious diseases, in particular, control of epidemics and pandemics. Current advances in artificial intelligence have resulted in robust solutions for epidemiological analysis, bioinformatics, and misinformation detection, while actively combating biases and ensuring fairness,” says Professor André C. Ponce de Leon F. de Carvalho of the Institute of Mathematics and Computer Sciences, University of São Paulo at São Carlos, Brazil.

“Thanks to this funding, we have the opportunity to contribute to the efforts to fight epidemics and improve human health. By collectively fortifying our defenses against infectious diseases, we can make a lasting impact on global health with increasing equity and equality. We also believe that this research will bring relevant scientific contributions in the areas of artificial intelligence and bioinformatics.”

As diseases increasingly spread from animals to people with continued human encroachment into natural landscapes, AI4PEP’s One Health concept is designed to recognize and respond to the reality that human health is interdependent with the health of animals and the environment. Climate change is another huge factor.

Projects being funded by AI4PEP

“Climate change is exacerbating existing health and social inequities by increasing the vulnerability of climate hotspots to the emergence and re-emergence of many infectious diseases, such as malaria, dengue fever and zika,” says Associate Professor of the Faculty of Liberal Arts and Professional Studies. “This is a huge initiative, but with the support of many of York’s research institutes, including the York Emergency Mitigation, Engagement, Response and Governance Institute directed by Distinguished Research Professor , as well as CIFAL and the Dahdaleh Institute for Global Health Research, I believe we can all collaborate with this exceptional global network to respond to the increasing threat of infectious diseases.”

AI solutions and data science approaches are increasingly being used across the globe to identify risks, conduct predictive modelling, and provide evidence-based recommendations for public health policy and action. 

“Responding to the complex nature of these interactions in a timely way requires the ability to analyze large data sets across multiple sectors,” says Kong of the Faculty of Science and director of the Africa-Canada Artificial Intelligence and Data Innovation Consortium.

But even with the promised good of these innovative tools to improve public health outcomes, the team recognizes there are important ethical, legal, and social implications that, if not appropriately managed and governed, can translate into significant risks to individuals and populations. AI4PEP intends to deepen the understanding of designing responsible AI solutions.

“Responsible AI entails intentional design to enhance health equity and gender equality and avoid amplifying existing inequalities and biases. We are working toward the realization of the United Nation’s Sustainable Development Goals, in particular, three and five – good health and well-being, and gender equality,” says Kong. “Colonialism and gendered oppression have enduring effects, disproportionately impacting the health and quality of life of formerly colonized people and vulnerable groups, including women, gender non-conforming people, people with disabilities, rural communities, and low-income households.”

Projects within the initiative will work closely with governments, public health agencies, civil society and others to generate new knowledge and collaborations to inform practice and policies at subnational, national, regional and global levels. 

About York University

York University is a modern, multi-campus, urban university located in Toronto, Ontario. Backed by a diverse group of students, faculty, staff, alumni and partners, we bring a uniquely global perspective to help solve societal challenges, drive positive change, and prepare our students for success. York’s fully bilingual Glendon Campus is home to Southern Ontario’s Centre of Excellence for French Language and Bilingual Postsecondary Education. York’s campuses in Costa Rica and India offer students exceptional transnational learning opportunities and innovative programs. Together, we can make things right for our communities, our planet, and our future.

Media Contact: Sandra McLean, York University Media Relations, 416-272-6317, sandramc@yorku.ca

York U-led Connected Minds explores how new technologies affect our brains, society and the most vulnerable /news/2023/05/19/york-u-led-connected-minds-explores-how-new-technologies-affecting-our-brains-society-and-the-most-vulnerable/ Fri, 19 May 2023 17:43:40 +0000 /news/?p=17157 The post York U-led Connected Minds explores how new technologies affect our brains, society and the most vulnerable appeared first on News@York.
