<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
</head>
<body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">
<div>Due to unforeseen circumstances, the Early Career Talk by Dr. Anindita Maiti is cancelled today.</div>
<div><br class="">
</div>
<div>Sorry for the inconvenience,</div>
<div>Douglas (on behalf of the Early Career Talks organizing committee)</div>
<div><br class="">
</div>
<div>
<blockquote type="cite" class="">
<div class=""><br class="">
______________________________<br class="">
<br class="">
Detailed information can be found at <a href="https://www.triumf.ca/research-program/lectures-conferences/upcoming-seminars-lectures" class="">
https://www.triumf.ca/research-program/lectures-conferences/upcoming-seminars-lectures</a>
<br class="">
<br class="">
Date/Time: Mon Feb 10 2025 at 10:00<br class="">
<br class="">
Location: Theory Room/Remote<br class="">
<br class="">
Speaker: Anindita Maiti (Perimeter Institute)<br class="">
<br class="">
Title: A Wilsonian RG framework for Regression Tasks in Supervised Learning (Early Career Talks)<br class="">
<br class="">
Abstract: The performance of machine learning (ML) models fundamentally hinges on their ability to discriminate between relevant and irrelevant features in data. We introduce a first-of-its-kind Wilsonian RG framework to analyze the predictions of overparameterized
neural networks (NNs), which are models characterized by an excess of parameters relative to the complexity of the task. These networks, trained via supervised learning, are known to produce noisy outputs in regression tasks. In our formulation, irrelevant
features within the data are systematically coarse-grained through momentum shell RG, inducing an RG flow that governs the evolution of noise in the predictions. When the irrelevant features follow a Gaussian distribution, this RG flow exhibits universality
across different NN architectures. In contrast, non-Gaussian features give rise to more intricate, data-dependent RG flows. This approach reveals novel behaviors in NNs that have eluded conventional ML methods. By advancing beyond philosophical analogies between
RG and ML, our framework offers a field theory-based methodology for understanding feature learning. This talk is based on the paper
<a href="https://arxiv.org/abs/2405.06008" class="">https://arxiv.org/abs/2405.06008</a>. Remote access:
<a href="https://ubc.zoom.us/j/61921073667?pwd=4Drp97meGJ3yq4Ro6k6LaoDncvJXaS.1" class="">
https://ubc.zoom.us/j/61921073667?pwd=4Drp97meGJ3yq4Ro6k6LaoDncvJXaS.1</a> Meeting ID: 619 2107 3667 Passcode: 609573 Coffee and cookies available 15 min before. BYO mug/cup.<br class="">
<br class="">
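For orientation only, here is the standard Wilsonian momentum-shell step that the abstract refers to; the notation is generic (a field &phi; with action S, UV cutoff &Lambda;, rescaling factor b &gt; 1) and is not taken from the paper. The field is split into slow and fast modes, &phi;(k) = &phi;_&lt;(k) + &phi;_&gt;(k), supported on |k| &lt; &Lambda;/b and &Lambda;/b &le; |k| &lt; &Lambda; respectively, and the fast modes are integrated out:<br class="">
<br class="">
&nbsp;&nbsp;&nbsp;&nbsp;e^{-S_{\mathrm{eff}}[\phi_&lt;]} = \int \mathcal{D}\phi_&gt; \; e^{-S[\phi_&lt; + \phi_&gt;]}<br class="">
<br class="">
followed by a rescaling of momenta and fields to restore the cutoff. Per the abstract, the paper applies an analogue of this coarse-graining to irrelevant features in the data rather than to a physical field, and studies the resulting flow of the prediction noise.<br class="">
<br class="">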
Early Career Talks<br class="">
<br class="">
______________________________<br class="">
<br class="">
Detailed information available can be found at <a href="x-msg://39/'https://www.triumf.ca/research-program/lectures-conferences/upcoming-seminars-lectures'" class="">
https://www.triumf.ca/research-program/lectures-conferences/upcoming-seminars-lectures</a>
<br class="">
<br class="">
_______________________________________________<br class="">
Triumf-seminars mailing list<br class="">
<a href="mailto:Triumf-seminars@lists.triumf.ca" class="">Triumf-seminars@lists.triumf.ca</a><br class="">
https://lists.triumf.ca/mailman/listinfo/triumf-seminars<br class="">
</div>
</blockquote>
</div>
<br class="">
</body>
</html>