Date/Time: Thu Feb 13 2025 at 14:00<br/><br/>Location: Auditorium/Hybrid<br/><br/>Speaker: Katelin Schutz (McGill)<br/><br/>Title: Revealing hidden particles and forces with gravitational clues<br/><br/>Abstract: How different would our Universe look with the addition of extra particles and forces beyond what we know? We already have ample gravitational evidence for at least one invisible component of matter that has properties unlike anything we have previously discovered. This "dark matter" is often assumed to be made of a single species of relatively inert particles, but there is a much richer range of possibilities, including scenarios where dark matter is part of a "dark sector" containing other auxiliary particles and forces. If there are dark forces affecting the distribution of dark matter in our Universe, then that distribution will gravitationally affect the visible matter that we can see. In this colloquium I will show how this gravitational footprint can reveal the behavior and properties of dark sectors. I will emphasize the constraining power of diverse astrophysical systems for these purposes, including the Sun, the local Milky Way, nearby dwarf galaxies, distant galaxies and galaxy clusters, large-scale cosmological structure, and the cosmic microwave background.
---
Remote Connection:
https://ubc.zoom.us/j/66051729340?pwd=51ZIwlJk5cPf2d3HI6kO4XaP7SNxqS.1
Meeting ID: 660 5172 9340
Passcode: 501242
<br/><br/>Refreshments available 15 min before the colloquium. BYOM: bring your own mug!<br/><br/>______________________________<br/><br/>Detailed information can be found at <a href='https://www.triumf.ca/research-program/lectures-conferences/upcoming-seminars-lectures'>https://www.triumf.ca/research-program/lectures-conferences/upcoming-seminars-lectures</a> <br/><br/>Date/Time: Mon Feb 10 2025 at 10:00<br/><br/>Location: Theory Room/Remote<br/><br/>Speaker: Anindita Maiti (Perimeter Institute)<br/><br/>Title: A Wilsonian RG framework for Regression Tasks in Supervised Learning (Early Career Talks)<br/><br/>Abstract: The performance of machine learning (ML) models fundamentally hinges on their ability to discriminate between relevant and irrelevant features in data. We introduce a first-of-its-kind Wilsonian RG framework to analyze the predictions of overparameterized neural networks (NNs), which are models characterized by an excess of parameters relative to the complexity of the task. These networks, trained via supervised learning, are known to produce noisy outputs in regression tasks. In our formulation, irrelevant features within the data are systematically coarse-grained through momentum-shell RG, inducing an RG flow that governs the evolution of noise in the predictions. When the irrelevant features follow a Gaussian distribution, this RG flow exhibits universality across different NN architectures. In contrast, non-Gaussian features give rise to more intricate, data-dependent RG flows. This approach reveals novel behaviors in NNs that have eluded conventional ML methods. By advancing beyond philosophical analogies between RG and ML, our framework offers a field theory-based methodology for understanding feature learning. This talk is based on the paper https://arxiv.org/abs/2405.06008.
Remote access: https://ubc.zoom.us/j/61921073667?pwd=4Drp97meGJ3yq4Ro6k6LaoDncvJXaS.1
Meeting ID: 619 2107 3667
Passcode: 609573
Coffee and cookies available 15 min before. BYO mug/cup.<br/><br/>Early Career Talks<br/><br/>______________________________<br/><br/>Detailed information can be found at <a href='https://www.triumf.ca/research-program/lectures-conferences/upcoming-seminars-lectures'>https://www.triumf.ca/research-program/lectures-conferences/upcoming-seminars-lectures</a> <br/><br/>