News from the Division of Biology and Biological Engineering
https://www.bbe.caltech.edu/news
2024-03-20T21:39:00+00:00
Copyright © 2024 California Institute of Technology

When Does the Brain Process Reward and Risk?
2024-03-20T21:39:00+00:00
Cynthia Eller | celler@caltech.edu
https://divisions.caltech.edu/newspage-index/when-does-the-brain-process-reward-and-risk

<p data-block-key="7idkv">Imagine that you are considering buying stock in a company. You know what its current value is, and you suspect that you could make a healthy return on your investment. But this stock is very volatile: some days up, some days down. Yes, you could make a lot of money, but you could also lose a lot of money. There is a clear reward, but also a lot of risk.</p><p data-block-key="5jsra">Many decisions are like this. The can of tomato paste on clearance at the grocery store is a fantastic bargain if it has not gone bad, but if it has, you have thrown away your money.</p><p data-block-key="6q43n">Decisions like these pose a classic problem studied by economists. New research from the lab of John O'Doherty, Caltech's Fletcher Jones Professor of Decision Neuroscience and an affiliate faculty member of the <a href="https://neuroscience.caltech.edu/">Tianqiao and Chrissy Chen Institute for Neuroscience</a>, aims to understand how the brain implements these kinds of decisions by testing a computational model that proposes how representations of reward and risk are built from experience. The neural processing of reward and risk was <a href="https://www.jneurosci.org/content/28/11/2745.short">previously studied at Caltech</a> via a technique called functional magnetic resonance imaging (fMRI), which measures changes in blood flow inside the brain. 
Researchers found that a region of the brain called the anterior insula is activated when people assess risk and process uncertainty.</p><p data-block-key="7i9v6">In a new study, electrodes implanted deep within the brains of patients (for unrelated therapeutic purposes) allowed O'Doherty and his team to obtain even more precise measurements of brain activity during decision-making. The work revealed that, as expected, the so-called reward prediction error (the difference between the expected value and the observed value) appears first and is followed by the risk prediction error (the difference between the expected uncertainty and the actual uncertainty), which relies on the same neural processes as the reward prediction error. Both signals were found in the anterior insula. These findings suggest that the reward prediction error is used to calculate the risk prediction error, which can then be used to learn to assess riskiness, a necessary guide to decision-making.</p><p data-block-key="2ttt2">The work was published in the March 9, 2024, issue of <i>Nature Communications</i>.</p><p data-block-key="56q40">Vincent Man, a senior postdoctoral scholar research associate in neuroscience and a co-author of the paper, explains: "fMRI is great at telling us where in the brain something is happening, but it is limited in terms of telling us <i>when</i> things happen, at least on the fast timescales at which we think these neural processes unfold."</p><p data-block-key="a8m9j">For this study, patients being evaluated for epilepsy were recruited at the University of Iowa Hospitals and Clinics. 
To monitor their seizure activity, these individuals had electrodes implanted deep in key regions of their brain, including in the anterior insula, which allowed researchers to detect neural activity at a millisecond timescale that is not possible with fMRI.</p><p data-block-key="a06hc">Then, the participants played a very simple card game using 10 playing cards numbered from ace to 10, with the ace counting as one. They were asked to predict, sight unseen, if the second card would be higher or lower in value than the first card. Since neither card was visible, this was always a completely random guess. After the first card was shown, participants would get some information about how accurate their guess might be. For example, if they predicted that the second card would be lower, and the first card was a 10, they would know immediately that their guess was correct. If the first card was an ace, they would know they were wrong. But if the first card was a five, the outcome remained uncertain until the second card was revealed.</p><p data-block-key="1chcm">"Basically, with this game we are drawing an arc from no uncertainty to maximal uncertainty," explains Man, who works in O'Doherty's lab. "The computational model predicts that you make one computation and form an expectation about risk. When you see card two, there is a second computation to assess the expected risk." 
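
The arc Man describes can be illustrated with a little arithmetic. The sketch below is my own toy formulation based on the article's verbal descriptions, not the authors' actual model: a win is coded as 1 and a loss as 0, expected risk is taken to be the outcome variance, and the risk prediction error is formed from the squared reward prediction error (one common formulation in the reward-learning literature).

```python
# Toy sketch of the card game's uncertainty arc and its two prediction errors.
# An illustration of the ideas in this article, not the study's actual model.

def p_correct(first_card: int, guess: str) -> float:
    """Probability the guess ('higher' or 'lower') is correct, given the
    first card (1-10); nine equally likely values remain for card two."""
    lower = (first_card - 1) / 9
    return lower if guess == "lower" else 1 - lower

def prediction_errors(first_card: int, guess: str, won: bool):
    """Reward PE: observed reward (win = 1, loss = 0) minus expected reward.
    Risk PE: observed surprise (squared reward PE) minus expected risk,
    where expected risk is the outcome variance p * (1 - p)."""
    expected_reward = p_correct(first_card, guess)
    expected_risk = expected_reward * (1 - expected_reward)
    repe = (1.0 if won else 0.0) - expected_reward
    ripe = repe ** 2 - expected_risk
    return repe, ripe

# First card 10, guess "lower": the outcome is certain, so both errors vanish.
print(prediction_errors(10, "lower", True))   # (0.0, 0.0)
# First card 5, guess "lower": near-maximal uncertainty, mildly surprising win.
print(prediction_errors(5, "lower", True))
```

Expected risk is zero at the ends of the arc (first card ace or 10) and largest for middle cards, matching the game's design.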
The computations used to make these predictions are identified as the reward prediction error (RePE), which updates an expected reward against the observed reward (the actual card drawn), and the risk prediction error (RiPE), which assesses the observed risk against the expected risk.</p><p data-block-key="cjr4k">Activity detected in the anterior insula during these games showed exactly this two-step process following the display of card two: reward prediction error evaluation first, followed by risk prediction error evaluation.</p><p data-block-key="8rhae">"We're validating a theoretical idea about the relationship between reward and risk and how they relate to each other," Man says. "The fact that the neural signature is consistent with the theory is nice; it grounds the theory."</p><p data-block-key="7q5as">O'Doherty adds: "Determining how the brain generates these kinds of computations can help us ultimately build more accurate models of how the brain learns and makes decisions, which could be useful not only for understanding how the brain works in general, but also, potentially, for understanding how these processes might go wrong in diseases such as problem gambling, addiction, or other psychiatric disorders."</p><p data-block-key="9t91e">The <a href="https://www.nature.com/articles/s41467-024-46094-1">paper</a> is titled "Temporally Organized Representation of Reward and Risk in the Human Brain." The authors are Man, O'Doherty, and Jeffrey Cockburn of Caltech; Oliver Flouty of the University of South Florida; and Phillip E. Gander, Masahiro Sawada, Christopher K. Kovach, Hiroto Kawasaki, Hiroyuki Oya, and Matthew A. 
Howard III of the University of Iowa.</p>

Senior Named Gates Scholar
2024-03-11T16:38:58.078476+00:00
Lori Dajose | ldajose@caltech.edu
https://divisions.caltech.edu/newspage-index/senior-named-gates-scholar

<p data-block-key="2gx6k">Senior Lily DeBell has been named to the 2024 class of Gates Cambridge Scholars at the University of Cambridge. The Gates Cambridge Scholarship program, established in 2000 through a donation to the University of Cambridge from the Bill and Melinda Gates Foundation, recognizes young people from around the world who not only excel academically but also display a commitment to social issues and to bettering the world.</p><p data-block-key="vkh1">After graduating from Caltech this spring with her BS in biology, DeBell will complete an MPhil in biological science at Cambridge, where she will study how certain proteins guide the deadenylation of messenger RNAs (mRNAs). Deadenylation is a process in which a particular part of an RNA, called its poly(A) tail, is cut off in order to degrade the RNA and stop gene expression. Through this process, cells can rapidly shift their gene expression.</p><p data-block-key="b3jqo">"This project will promote advances in medicine and human health by establishing a basis for future research on new tools to manage gene expression during disease," DeBell says. "I am pleased to be part of the Gates Cambridge program and look forward to working alongside students performing cutting-edge research to improve the human condition."</p><p data-block-key="eo0d4">At Caltech, DeBell conducts research in the laboratory of Bil Clemons, Arthur and Marian Hanisch Memorial Professor of Biochemistry, where she analyzes membrane protein evolution. Membrane proteins have promising potential as drug targets because therapeutics directed to these proteins do not need to permeate the cell to exert an effect on cellular processes. 
</p><p data-block-key="6nr07">In addition to her research, DeBell, a native of Baltimore, Maryland, serves as a peer academic coach in chemistry and biology, volunteers with the RISE tutoring program, and sings in the Caltech Glee Club.</p>

Mutant Newts Can Regenerate Previously Defective Limbs
2024-03-07T04:14:40.910457+00:00
Lori Dajose | ldajose@caltech.edu
https://divisions.caltech.edu/newspage-index/mutant-newts-can-regenerate-previously-defective-limbs

<p data-block-key="qvfes">Many salamanders have the remarkable ability to regrow their own limbs and tails after an injury. How are they able to do this, while more complex mammals, such as humans, cannot?</p><p data-block-key="8au6c">"Certain animals like zebrafish and salamanders are able to regenerate body parts, but higher up on the evolutionary tree of life, regeneration happens much more rarely," says <a href="https://www.bbe.caltech.edu/people/marianne-bronner">Marianne Bronner</a>, the Edward B. Lewis Professor of Biology and director of the Beckman Institute at Caltech. "Though we've seen that some human babies can actually regenerate the tips of their fingers, this ability does not persist through adulthood. We want to understand the molecular processes that underlie regeneration."</p><p data-block-key="5b8sq">Usually, the only time an animal grows a limb is during embryonic development, which has led researchers to theorize that the processes guiding development and regeneration are similar. However, a new collaborative study by the Bronner Lab at Caltech and the laboratory of Ken-ichi T. 
Suzuki of the National Institute for Basic Biology in Japan shows that a particular molecule necessary for proper development is <i>not</i> needed for regeneration.</p><p data-block-key="dg0ef">A paper describing the research appears in the journal <i>Proceedings of the National Academy of Sciences</i> on March 5.</p><p data-block-key="f3geb">Led by postdoctoral scholar Miyuki Suzuki, the new study uses the newt <i>Pleurodeles waltl</i>, an amphibian commonly known as the Iberian ribbed newt, to examine the molecule FGF10 (fibroblast growth factor 10). FGF10 is known to play a major role in guiding the cellular development of the animal's limbs during the embryonic stage.</p><p data-block-key="6p3e0">As adults, <i>Pleurodeles waltl</i> have robust regenerative abilities. If a limb is severed, regardless of where along its length the cut is made, the animal will grow back the proper structures—bone, muscle, nerves, and so on—as if nothing had happened. This newt even has the ability to regenerate parts of its heart, making it a good candidate for studying how regenerative processes work.</p><p data-block-key="bicsk">In the new study, Miyuki Suzuki created a genetic line of newts that lacked FGF10. Without this molecule, these animals developed defective back legs that were often severely stunted with missing digits. However, when those back legs were amputated, the newts were able to grow back fully formed legs, suggesting that regeneration and development may be guided by different processes.</p><p data-block-key="99e1m">The paper is titled <a href="https://www.pnas.org/doi/10.1073/pnas.2314911121">"<i>Fgf10</i> mutant newts regenerate normal hindlimbs despite severe developmental defects."</a> Miyuki Suzuki is the study's first author. In addition to Suzuki and Bronner, co-authors are Akinori Okumura, Akane Chihara, Yuki Shibata, Machiko Teramoto, Kiyokazu Agata, and Ken-ichi T. 
Suzuki of the National Institute for Basic Biology in Japan; and Tetsuya Endo of Aichi Gakuin University in Japan. Funding was provided by the Japan Science and Technology Agency, the Japan Society for the Promotion of Science, the Human Frontier Science Program Organization, and the National Institutes of Health. Marianne Bronner is an affiliated faculty member with the <a href="https://neuroscience.caltech.edu">Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech</a>.</p>

Paul Sternberg Receives Prestigious Genetics Society Award
2024-02-21T19:59:39.951768+00:00
Lori Dajose | ldajose@caltech.edu
https://divisions.caltech.edu/newspage-index/paul-sternberg-receives-prestigious-genetics-society-award

<p data-block-key="ibgn3"><a href="https://www.bbe.caltech.edu/people/paul-w-sternberg">Paul Sternberg</a>, the Bren Professor of Biology, has received the Thomas Hunt Morgan Medal from The Genetics Society of America (GSA). The award, given for lifetime contributions to the field of genetics, is named after Nobel Laureate <a href="https://www.caltech.edu/map/milestone/59">Thomas Hunt Morgan</a>, who founded the Division of Biology at Caltech (now the Division of Biology and Biological Engineering) in 1928.</p><p data-block-key="8t9tv">Throughout his career, Sternberg has used the model organism <a href="https://www.caltech.edu/about/news/survival-mode-tiny-worms-brain-81107"><i>Caenorhabditis elegans</i></a>, a nematode or roundworm, to make advances in genetics, developmental biology, evolution, neuroscience, and disease research. He has taken a leading role in developing information resources such as WormBase for <i>C. elegans</i>, the Alliance of Genome Resources, and the Gene Ontology Consortium; he co-founded microPublication Biology, a short-format, peer-reviewed journal that seeks to make scholarly communication effective and compatible with knowledge bases. 
The GSA recognizes Sternberg for "his lifelong commitment to the open sharing of data across biomedical research."</p><p data-block-key="3c7t">"I am truly honored to be recognized in this way by my peer geneticists," Sternberg says. "Genetics has always been my favorite way of reverse engineering complex living systems because of its combination of the elegant and the practical."</p><p data-block-key="5jh4g">Sternberg joined the Caltech faculty in 1987. He is an affiliated faculty member with the <a href="http://neuroscience.caltech.edu">Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech</a>.</p><p data-block-key="6dtja">Several former Caltech faculty members have previously won the award, including Norman Horowitz in 1998, Ed Lewis in 1983, George Beadle in 1984, Ray Owen in 1993, and Seymour Benzer in 1986. Caltech alumni have also received the award, including <a href="https://magazine.caltech.edu/post/the-most-beautiful-experiment">Matthew Meselson (PhD '57) in 1995, former postdoctoral scholar Franklin Stahl in 1996</a>, Ira Herskowitz (BS '67) in 2002, and David Hogness (BS '49, PhD '53) in 2003.</p>

Measuring Stress
2024-01-22T04:24:06.847003+00:00
Cynthia Eller | celler@caltech.edu
https://divisions.caltech.edu/newspage-index/measuring-stress

<p data-block-key="c7lsh">In the latest of a series of innovative designs for wearable sensors that use sweat to identify and measure physiological conditions, Caltech's Wei Gao, assistant professor of medical engineering, has devised an "electronic skin" that continuously monitors nine different markers that characterize a stress response. 
Those wearing this electronic skin—a small, thin adhesive worn on the wrist, called CARES (consolidated artificial-intelligence-reinforced electronic skin)—are free to engage in all their normal daily activities with minimal interference during testing, which allows for the measurement of both baseline and acute levels of stress.</p><p data-block-key="csh3t">Stress is a slippery concept. We talk about "feeling stressed" or a situation "being stressful," and we may attach stress to physical symptoms: "I have a stress headache" or "I'm grinding my teeth at night. It must be stress." The term stress can apply to all sorts of feelings, symptoms, behaviors, and experiences.</p><p data-block-key="947af">Hans Selye, a physician and chemist born in Vienna in 1907, was the first to define stress as a medical condition. Struck by the similar complaints—such as tiredness, low appetite, and lack of motivation—that he heard from patients suffering from very different illnesses, Selye speculated that all of the patients were responding to what they had in common: being sick. He defined stress as a "nonspecific response of the body to any demand."</p><p data-block-key="d7taf">Stress may be experienced positively as excitement or energy, or negatively as shock or anxiety. But however stress may be experienced emotionally, it is now widely agreed that depending on its severity and duration, both acute and chronic stress can damage our physical and mental health, and reduce our ability to function as we would like.</p><p data-block-key="apmm7">Because stress is, as Selye described it, "nonspecific," there is no single biomarker available to tell us definitively whether or how much a person is stressed. However, stress generates a constellation of bodily reactions that, taken together, can provide a measure of stress independent of self-reports. 
Gao is monitoring this constellation with CARES.</p><p data-block-key="c5oa5">"When a person is under stress, hormones like epinephrine, norepinephrine, and cortisol are released into the bloodstream," explains Gao, who is also an investigator with the Heritage Medical Research Institute and a Ronald and JoAnne Willens Scholar. "Sweat becomes rich with metabolites like glucose, lactate, and uric acid, and electrolytes like sodium, potassium, and ammonium. These are substances we have measured before using <a href="https://www.caltech.edu/about/news/new-wearable-sensor-detects-even-more-compounds-in-human-sweat">microfluidic sampling on a wearable sweat sensor</a>. What is new in CARES is that sweat sensors are integrated with sensors that record pulse waveforms, skin temperature, and galvanic skin response: physiological signals that also indicate stress in predictable ways."</p><p data-block-key="f1m7b">New materials further boost the performance of CARES. Though previously used materials for sweat sensors could be produced efficiently via inkjet printing and were capable of accurate measurement of even very scarce compounds, the materials gradually broke down in the presence of bodily fluids. The introduction of a nickel-based compound helps to stabilize the enzymatic-based sensors, such as those that detect lactate or glucose, as does a new polymer added to the ion-based sensors, which detect biomarkers like sodium or potassium. "Adding these new materials greatly enhances the sensor stability during long-term operation," Gao reports. Like previous sweat sensors, CARES can be battery powered and can wirelessly communicate with a phone or computer via Bluetooth.</p><p data-block-key="brr2v">Another important innovation with CARES is the addition of machine learning. 
Because stress comes in many different forms and stimulates a complex response affecting many different bodily systems, interpreting a wealth of data accurately is key to the usefulness of CARES and other sensors. Experiments inducing stress in subjects wearing the CARES device demonstrated that the sensor accurately measures the interrelatedness of physiological (such as pulse) and chemical (such as glucose) biomarkers. Subjects also answered questionnaires to self-report their feelings of anxiety and psychological stress before and after exposure to stressful situations like vigorous exercise or intense video gameplay. Data showed clear correlations between self-reports of stress and its physicochemical correlates as measured by CARES.</p><p data-block-key="7h9oa">"High levels of stress and anxiety caused by demanding work environments, such as those experienced by soldiers or astronauts, can significantly affect performance," Gao notes. "Early detection of the severity of stress allows for timely intervention. Our wearable sensor, combined with machine learning, has the potential to provide real-time stress-level insights."</p><p data-block-key="2230l">The paper describing the CARES device, titled "A physicochemical sensing electronic skin for stress response monitoring," appears in the January 19 issue of <i>Nature Electronics.</i> Co-authors are Changhao Xu (MS '20), Yu Song, Juliane R. Sempionatto, Samuel R. Solomon (MS '23), You Yu, Roland Yingjie Tay, Jiahong Li, Wenzheng Heng (MS '23), Jihong Min (MS '19), and Alison Lao of Caltech; Hnin Y. Y. Nyein of Hong Kong University of Science and Technology; and Tzung K. Hsiai and Jennifer A. 
Sumner of UCLA.</p><p data-block-key="j1sk">Funding for the research was provided by the Translational Research Institute for Space Health through NASA, the Office of Naval Research, the Army Research Office, the National Institutes of Health, the National Science Foundation, the National Academy of Medicine, and the Heritage Medical Research Institute.</p>

Molecular Self-Assembly Can "Think" Like a Neural Network
2024-01-18T15:49:00+00:00
Lori Dajose | ldajose@caltech.edu
https://divisions.caltech.edu/newspage-index/molecular-self-assembly-can-think-like-a-neural-network

<p data-block-key="w4594">Sometimes hearing just a few notes of a song is enough to take us back through time to a moment long forgotten. Our brains can reconstruct entire memories through small pieces: Perhaps the scent of a perfume reminds you of your grandmother, or the taste of a casserole reminds you of home. How does this work?</p><p data-block-key="76ieq">The human brain is composed of billions of neurons working collectively. Neurons are like the building blocks of thought, and each one can serve multiple purposes. For example, different memories are encoded by different patterns of activity within the same neurons. The process is similar to how your smartphone screen can display different pictures using the same pixels, or how the same LEGO blocks can be used to construct different objects.</p><p data-block-key="28rda">How neurons do this has been a rapidly developing area of research in recent decades, and sophisticated models of neural networks are now commonplace in digital computers. Perhaps surprisingly, this type of computation is not unique to neurons: The same computational principles can arise in other biological and even purely physical processes. 
</p><p data-block-key="4cfp2">A new study by researchers at Caltech, the University of Chicago, and Maynooth University in Ireland has now demonstrated how neural-network-like abilities are intrinsic to the natural dynamics of molecules as they self-assemble into structures. The phenomenon is analogous to how neurons work together to recall and reassemble memories, and thus may be considered a form of "associative recall." The research was conducted in the laboratory of <a href="https://www.bbe.caltech.edu/people/erik-winfree">Erik Winfree</a> (PhD '98), professor of computer science, computation and neural systems, and bioengineering; and is described in a paper appearing in the journal <i>Nature</i> on January 18.</p><p data-block-key="42sp0">"The phenomenon of neural-network-like computing arises whenever a set of molecules have the capacity to come together in multiple distinct ways," says Arvind Murugan (BS, MS '04), associate professor of physics at the University of Chicago and co-author of the paper. "In our case, we used short DNA strands in a test tube, but it could have been other kinds of self-assembling molecules. Our study shows that, if certain molecules are more common in a given solution, they can trigger the formation of a 'seed' that subsequently grows into just one of the distinct possible structures—analogous to how a full memory can be formed out of just a 'seed' of recollection."</p><p data-block-key="1na21">To understand what is happening in this test tube full of molecules, imagine a giant swimming pool containing hundreds of LEGO pieces. LEGO pieces can be assembled in many different ways, enabling you to create a car, or a castle, or a caterpillar, all out of the same building blocks. 
Here is how self-assembly performs associative recall: give the swimming-pool mixture a 'seed' of a design—say, some pieces already snapped together to create a wheel and windshield. If the rest of the components assemble themselves into the desired final product (in this case, a car), recall has succeeded. If instead the pool of blocks assembles into a Frankenstein-like hybrid of partial structures—a car windshield snapped onto half a caterpillar—recall has failed.</p><p data-block-key="aalh1">In this study, the team designed 917 different molecules, or "molecular tiles," that can be combined to form three different two-dimensional shapes: the letters H, A, or M. (These letters were chosen as a nod to a particular kind of neural network architecture called a Hopfield Associative Memory.) As an analogy, imagine a 917-piece jigsaw puzzle that can be put together in three different ways to yield three distinct images.</p><p data-block-key="81sdn">The team put three trillion of these molecules, with relatively equal amounts of each of the 917 variations, into a test tube and observed that the pieces would indeed self-assemble to form many tiny H's, A's, and M's. Though some of the letters only formed partially, there were no accidental hybrids of two or three letters. This was an important first discovery of the study.</p><p data-block-key="acsn4">"This was an example of a molecular system behaving like a neural network: assembling distinct shapes out of the same components, like how the same neurons can encode multiple distinct memories," says Constantine Evans (MS '11, PhD '14), the study's first author.</p><p data-block-key="146up">Then, inspired by how the human brain processes different scents, the team examined what would happen if the test tube contained different concentrations of the molecules. 
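
The Hopfield associative memory that the letters H, A, and M nod to can itself be demonstrated in a few lines of code. Below is a toy illustration of the principle under stated assumptions: a hypothetical 16-unit network with three mutually orthogonal patterns, far smaller than the 917-tile system, where a corrupted "seed" is driven back to the nearest stored pattern, much as a seed structure nucleates one complete shape.

```python
# Toy Hopfield associative memory: recall a stored pattern from a corrupted seed.
# A hypothetical 16-unit illustration of the principle, not the paper's system.

N = 16
patterns = [
    [+1] * 8 + [-1] * 8,            # stands in for "H"
    ([+1] * 4 + [-1] * 4) * 2,      # stands in for "A"
    [+1, +1, -1, -1] * 4,           # stands in for "M"
]  # mutually orthogonal, so recall is exact

# Hebbian weights: W[i][j] = sum over patterns of p[i] * p[j], zero diagonal.
W = [[0 if i == j else sum(p[i] * p[j] for p in patterns) for j in range(N)]
     for i in range(N)]

def recall(state, steps=5):
    """Repeatedly update every unit to sign(W @ state) until it settles."""
    state = list(state)
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(N)) >= 0 else -1
                 for i in range(N)]
    return state

seed = list(patterns[0])
seed[0], seed[9] = -seed[0], -seed[9]   # corrupt two units of the first pattern
print(recall(seed) == patterns[0])      # prints True: the full pattern returns
```

The corrupted seed falls in the basin of attraction of the stored pattern it most resembles, so the dynamics complete it, which is the "associative recall" the article describes.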
Olfaction, the sense of smell, distinguishes different scents based on the concentrations of the odor molecules present: the brain can tell scents apart, even when the same molecules are involved, because those molecules occur at differing concentrations.</p><p data-block-key="1buc7">"Classifying concentration patterns is a familiar task to all of us: an 'odor' is characterized by a pattern of which molecules are present in high, intermediate, or low concentrations. So, distinguishing grandma's lasagna from a floral bouquet or an oily mechanic's garage is a matter of classifying concentration patterns," Winfree says.</p><p data-block-key="cf3dp">The team wanted to find out the extent to which the self-assembly process acts like a neural network as it classifies the concentration patterns.</p><p data-block-key="fr41i">Of the 917 distinct kinds of molecular tiles, some appeared in all three shapes—the team called these "purple" tiles. Those that were unique to H were called "pink"; to A, "green"; and to M, "blue." Any given purple tile would appear in all three letters, but in different regions of the shape with different neighbors. For example, a cluster of purple tiles may be located together—or "co-localized"—in H, but those same tiles would be scattered throughout A and M.</p><p data-block-key="5j02e">What happens in a tube with an increased concentration of certain purple tiles that are co-localized in one shape—for example, H? Though these tiles are found in A and M, could their co-localization in H create a seed that nucleates H rather than A or M? The team was excited to find that this was indeed true: A high concentration of certain molecules found throughout all shapes but only co-localized in one led to the nucleation of that one shape.</p><p data-block-key="ff3be">"Throughout biology, you find carefully self-assembled structures. But some components are found in multiple structures—for example, the yeast cyclin-dependent kinase Cdc28," Murugan says. 
"These structures are not always present; they need to come into being at the right times and in the right places, and the kinetics of nucleation is what governs this. So, if biology also exploits the neural-network-like collective modes of nucleation that we demonstrated in this work, then ubiquitous biological self-assembly might be hiding, in plain sight, powerful information processing and decision-making capabilities."</p><p data-block-key="c6c7d">"It's exciting when concepts from one scientific field can, when you look at it right, be seen to appear in a seemingly unrelated field," Murugan adds. "Before our use of co-localization in molecular self-assembly as the principle underlying pattern recognition, a very similar computational architecture was discovered in the brain for how an animal can recognize where it is—the so-called 'place cells' of the hippocampus. Now we are looking for this principle of how nucleation can perform decision-making in other kinds of biomolecular processes, such as multicomponent condensates and genetic regulatory networks." </p><p data-block-key="bmf86">The project builds on several decades of work in the Winfree lab.</p><p data-block-key="6e4ob">Evans says: "What's exciting about DNA nanotechnology is that it's really the only molecular design technology today that allows one to investigate sophisticated theories of molecular computation in the large N limit—here almost a thousand different kinds of molecules all working together. Thankfully, at Caltech, we had access to technology that could automate the experimental preparation of samples with that many components mixed in arbitrary ratios, as well as access to a high-speed atomic force microscope capable of imaging individual molecular assemblies in great detail."</p><p data-block-key="aisfd">"Coming back to Caltech, my alma mater, to participate in the experiments with my own two hands was a really special experience for me," Murugan says. 
"Not only because of the personal connection and opportunities to reminisce about the good old days, but more deeply because it is an inspiring thing to see a beautiful theoretical idea come to life before one's eyes."</p><p data-block-key="tcdv">The paper is titled <a href="https://www.nature.com/articles/s41586-023-06890-z">"Pattern recognition in the nucleation kinetics of non-equilibrium self-assembly."</a> In addition to Evans, Murugan, and Winfree, Jackson O'Brien of the University of Chicago is a co-author. Funding was provided by the National Science Foundation, the Evans Foundation for Molecular Medicine, the European Research Council, Science Foundation Ireland, and the Carver Mead New Adventures Fund.</p>Aided by AI, New Catheter Design Prevents Bacterial Infections2024-01-05T16:47:00+00:00Lori Dajoseldajose@caltech.eduhttps://divisions.caltech.edu/newspage-index/aided-by-ai-new-catheter-design-prevents-bacterial-infections<p data-block-key="0dcm0">Bacteria are remarkably good swimmers—a trait that can be detrimental to human health. One of the most common bacterial infections in a healthcare setting comes from bacteria entering the body through catheters, thin tubes inserted in the urinary tract. Though catheters are designed to draw fluids out of a patient, bacteria are able to propel themselves upstream and into the body via catheter tubes using a unique swimming motion, causing $300 million of catheter-associated urinary infections in the U.S. annually.</p><p data-block-key="1u0kt">Now, an interdisciplinary project at Caltech has designed a new type of catheter tube that impedes the upstream mobility of bacteria, without the need for antibiotics or other chemical antimicrobial methods. 
With the new design, which was optimized by novel artificial intelligence (AI) technology, the number of bacteria that are able to swim upstream in laboratory experiments was reduced 100-fold.</p><p data-block-key="2aq5d">A paper describing the study appears in the journal <i>Science Advances</i> on January 3. The work was a collaboration between the laboratories of <a href="https://www.eas.caltech.edu/people/daraio">Chiara Daraio</a>, G. Bradford Jones Professor of Mechanical Engineering and Applied Physics and Heritage Medical Research Institute Investigator; <a href="https://cce.caltech.edu/people/john-f-brady">John Brady</a>, Chevron Professor of Chemical Engineering and Mechanical Engineering; <a href="https://www.bbe.caltech.edu/people/paul-w-sternberg">Paul Sternberg</a>, Bren Professor of Biology; and <a href="https://www.eas.caltech.edu/people/anima">Anima Anandkumar</a>, Bren Professor of Computing and Mathematical Sciences.</p><p data-block-key="bdggi">In catheter tubes, fluid exhibits a so-called Poiseuille flow, in which movement is fastest at the center and slowest near the wall, much like a river's current, which runs fast mid-stream and slow near the banks. Bacteria, as self-propelling organisms, exhibit a unique "two steps forward along the wall, one step back in the middle" motion that produces their forward progress in tubular structures. Researchers in the Brady lab had previously modeled this phenomenon.</p><p data-block-key="dmhgq">"One day, I shared this intriguing phenomenon with Chiara Daraio, framing it simply as a 'cool thing,' and her response shifted the conversation toward a practical application," says Tingtao Edmond Zhou, postdoctoral scholar in chemical engineering and a co-first author of the study. 
"Chiara's research often plays with all kinds of interesting geometries, and she suggested tackling this problem with simple geometries."</p><p data-block-key="88rla">Following that suggestion, the team designed tubes with triangular protrusions, like shark fins, along the inside of the tube's walls. Simulations yielded promising results: These geometric structures effectively redirected bacterial movement, propelling them toward the center of the tube where the faster flow pushed them back downstream. The triangles' fin-like curvature also generated vortices that further disrupted bacterial progress.</p><p data-block-key="7ksil"></p><embed alt="Gray curved triangles, angled to point right, at the top and bottom. Pill-shaped bacteria traveling left are caught and sent to the right." embedtype="image" format="MiddleAlignMedium" id="35137"/><p data-block-key="5r4vm"></p><p data-block-key="6e4ta">Zhou and his collaborators aimed to verify the design experimentally but needed additional biology expertise. For that, Zhou reached out to Olivia Xuan Wan, a postdoctoral scholar in the Sternberg laboratory.</p><p data-block-key="6j99v">"I study nematode navigation, and this project resonated deeply with my specialized interest in motion trajectories," says Wan, who is also a co-first author on the new paper. For years, the Sternberg laboratory has conducted research into the navigation mechanisms of the nematode <i>Caenorhabditis elegans</i>, a rice grain–sized soil organism commonly studied in research labs and thus had many of the tools to observe and analyze the movements of microscopic organisms.</p><p data-block-key="7rfm8">The team quickly transitioned from theoretical modeling to practical experimentation, using 3D printed catheter tubes and high-speed cameras to monitor bacterial progress. 
The tubes with triangular inclusions reduced upstream bacterial movement by two orders of magnitude (a 100-fold decrease).</p><p data-block-key="6fh62">The team then continued simulations to determine the most effective triangular obstacle shape for impeding bacteria's upstream swimming. They fabricated microfluidic channels analogous to common catheter tubes with the optimized triangular designs to observe the movement of <i>E. coli</i> bacteria under various flow conditions. The observed trajectories of the <i>E. coli</i> within these microfluidic environments aligned almost perfectly with the simulated predictions.</p><p data-block-key="feta6">The collaboration grew as the researchers aimed to continue improving the geometric tube design. Artificial intelligence experts in the Anandkumar laboratory provided the project with cutting-edge AI methods called neural operators. This technology accelerated the catheter-design optimization computations from days to minutes. The resulting model proposed tweaks to the geometric design, further optimizing the triangle shapes to prevent even more bacteria from swimming upstream. The final design enhanced the efficacy of the initial triangular shapes by an additional 5 percent in simulations.</p><p data-block-key="3l3ie">"A collaborative spirit defines Caltech," says Sternberg. "Caltech people help each other. This endeavor was truly an interdisciplinary journey, weaving together diverse fields of study."</p><p data-block-key="582oa">"Our journey from theory to simulation, experiment, and, finally, to real-time monitoring within these microfluidic landscapes is a compelling demonstration of how theoretical concepts can be brought to life, offering tangible solutions to real-world challenges," says Zhou.
"I'm very lucky to be at Caltech with so many talented colleagues."</p><p data-block-key="5vtki">The paper is titled <a href="https://www.science.org/doi/10.1126/sciadv.adj1741">"AI-aided geometric design of anti-infection catheters."</a> Zhou and Wan are the study's co-first authors. In addition to Anandkumar, Brady, Sternberg, and Daraio, additional Caltech co-authors are graduate student Zongyi Li and alum Zhiwei Peng (PhD '22). Daniel Zhengyu Huang of Peking University in Beijing, formerly a postdoctoral scholar in the laboratory of Tapio Schneider, the Theodore Y. Wu Professor of Environmental Science and Engineering and JPL senior research scientist, is also a co-author. Funding was provided by the Donna and Benjamin M. Rosen Bioengineering Center, the Heritage Medical Research Institute, the National Science Foundation, the Schmidt Futures program, the PIMCO Future Leaders Scholarship, the Amazon AI4Science Fellowship, and Bren Professorships. Stenberg and Anandkumar are affiliated faculty members with the <a href="https://neuroscience.caltech.edu/">Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech</a>.</p>2023 Year in Review2023-12-18T16:25:42.997628+00:00Kathy Svitilksvitil@caltech.eduhttps://divisions.caltech.edu/newspage-index/2023-year-in-review<p data-block-key="eb6ts">As we close out the year and look ahead to the next, we take this opportunity to reflect on the groundbreaking research findings that emerged from Caltech in 2023. From furthering humanity's knowledge of and response to viruses, to refining the use of autonomous technologies, to leveraging advanced instrumentation to bring greater clarity on our universe and our place within it, Caltech continues to powerfully and meaningfully shape understanding of and interaction with the world. 
Here are some highlights.</p><h3 data-block-key="3asd6">Shaking and Quaking Earth and Moon</h3><p data-block-key="8mhjq">New insights into earthquake physics that could improve early warning systems emerged from the use of <a href="https://www.caltech.edu/about/news/fiber-optic-cables-detect-and-characterize-earthquakes">existing underground fiber-optic cables</a>, while a study of earthquake swarms on the eastern side of the Sierra Nevada mountains in California led researchers to conclude that the <a href="https://www.caltech.edu/about/news/california-supervolcano-is-cooling-off-but-may-still-cause-quakes">Long Valley caldera</a>, the remains of a volcanic eruption occurring 760,000 years ago, is simply settling from the effects of the ancient event and not heading toward another one.</p><p data-block-key="ef4g4">Far more distant quakes—on the Moon—were examined using data from seismometers placed by Apollo astronauts on the lunar surface five decades ago. With the help of machine learning, researchers showed that the Moon shakes more often and more predictably than Earth, principally because of the extreme swings of temperature experienced by its atmosphere-less surface. 
Along with these "thermal" moonquakes, <a href="https://www.caltech.edu/about/news/the-lunar-alarm-clock-new-study-characterizes-regular-moonquakes">regular moonquakes</a> were found to occur each morning, caused by vibrations of the abandoned Apollo 17 lunar lander structure as it expands and contracts with changes in surface temperature.</p><p data-block-key="46e3h">The theory that <a href="https://www.caltech.edu/about/news/the-remains-of-an-ancient-planet-lie-deep-within-earth">the Moon itself is the result of a collision between Earth and another planetary body</a>—called Theia—gained support from Caltech researchers, while a modeling study suggested our solar system's inner planets and the moons of the outer planets, as well as numberless "super-Earths" scattered across the universe, may all be the result of <a href="https://www.caltech.edu/about/news/how-do-rocky-planets-really-form">a single mechanism of rocky planet formation</a> that takes place in a narrow band around stars or planets, where competing forces turn vapor into solids.</p><h3 data-block-key="56abi">Surveying the Cosmos and Finding Exoplanets, Two-Faced Stars, and Gravitational Waves</h3><p data-block-key="cva50">Thanks to spectral emission data obtained by the Caltech-led <a href="https://www2.keck.hawaii.edu/inst/kcwi/">Keck Cosmic Web Imager</a>, located at the W. M. Keck Observatory atop Maunakea on the island of Hawai'i, we are able to <a href="https://www.caltech.edu/about/news/cosmic-web-lights-up-in-the-darkness-of-space">view the cosmic web</a>, streams of gas-feeding galaxies that are faint and therefore difficult to visualize, with greater precision than ever before. 
Caltech teams unveiled a <a href="https://www.caltech.edu/about/news/star-eats-planet-brightens-dramatically">hot gas giant planet about the size of our Jupiter, located some 12,000 light years away, that is being devoured by its sun</a> (just as our Sun will consume Mercury, Venus, and probably Earth, in 5 billion years). Also discovered was a highly unusual white dwarf star that <a href="https://www.caltech.edu/about/news/two-faced-star-exposed">shows two very different faces to Earth-based telescopes</a> as it rotates on its axis every 15 minutes, one composed primarily of hydrogen and the other of helium.</p><p data-block-key="bsa7f">The Nanohertz Observatory for Gravitational Waves (<a href="https://nanograv.org/">NANOGrav</a>), using data from radio telescopes that monitor dead stars known as pulsars, provided Caltech astronomers with increased confidence that in addition to the supermassive events that produce gravitational waves that can be detected by <a href="https://www.ligo.caltech.edu/">LIGO</a> (the Laser Interferometer Gravitational-wave Observatory located in Hanford, Washington, and Livingston, Louisiana), there is a background hum, or <a href="https://www.caltech.edu/about/news/scientists-find-evidence-for-slow-rolling-sea-of-gravitational-waves">slow-rolling sea of gravitational waves</a> throughout the universe. 
For its part, LIGO extended its observations <a href="https://www.caltech.edu/about/news/ligo-surpasses-the-quantum-limit">beyond the quantum limit</a> using a technology called "squeezing" that allows quantum noise to be manipulated to improve detection and analysis of incoming gravitational waves.</p><p data-block-key="f49ve">Much closer to home, the basement of the Cahill Center for Astronomy and Astrophysics on the Caltech campus has served throughout 2023 as the site for <a href="https://www.caltech.edu/about/news/spherex-space-telescope-stays-cool-in-basement-at-caltech">testing the instrumentation of the SPHEREx (Spectro-Photometer for the History of the Universe, Epoch of Reionization and Ices Explorer) space telescope</a>, scheduled for launch in 2025. SPHEREx will map the entire sky at infrared wavelengths.</p><h3 data-block-key="2253g">Understanding Ourselves Through Modern Science and Ancient Documents</h3><p data-block-key="2natb">Sodium is a key nutrient for humans, but there can be a fine line between consuming too little or far too much. This year, Caltech biologists pinpointed the areas of the brain that give us an <a href="https://www.caltech.edu/about/news/newly-discovered-brain-circuit-controls-an-aversion-to-salty-tastes">appetite for salt</a> when we need it and the ability to tolerate high levels of salt in food and water. They also used machine learning to gain insight into the <a href="https://www.caltech.edu/about/news/a-theory-of-rage">unique neural mechanisms of anger</a> and our perception of <a href="https://www.caltech.edu/about/news/how-the-brain-creates-your-taste-in-art">beauty in art</a>.</p><p data-block-key="3pfri">Despite their promise and interrogatory power, new technologies cannot probe the behavior of people who lived centuries ago. 
To that end, a Caltech historian developed <a href="https://www.caltech.edu/about/news/ordinary-early-medieval-lives">a method to learn more about the ordinary people</a> of early medieval Europe using documents collected by ecclesiastical authorities of the time that record economic transactions, marriages, divorces, inheritances, disputes, and more.</p><h3 data-block-key="6tk4e">Improving Human Health Through AI, Wearable Sensors, and Artificial Embryos</h3><p data-block-key="9q830">In 2023, Caltech researchers continued their quest to improve modern medicine and the human condition through a variety of methods, including artificial intelligence (AI), computer models, wearable sensors, cutting-edge imaging technology, and more.</p><p data-block-key="8pvbq">Caltech researchers and colleagues presented a <a href="https://www.caltech.edu/about/news/ai-offers-tool-to-improve-surgeon-performance">new way to use AI to help surgeons evaluate and develop their performance</a>, while Caltech medical engineers further enhanced the capabilities of wearable sweat sensors, which can now monitor <a href="https://www.caltech.edu/about/news/wearable-patch-wirelessly-monitors-estrogen-in-sweat">estrogen</a> and <a href="https://www.caltech.edu/about/news/wearable-sweat-sensor-detects-molecular-hallmark-of-inflammation">C-reactive protein</a> (a marker for inflammation) levels, and developed a <a href="https://www.caltech.edu/about/news/smart-bandages-monitor-wounds-and-provide-targeted-treatment">"smart" bandage</a> that promises to improve chronic wound care by monitoring indications of inflammation or bacterial infection.</p><p data-block-key="3qgkr">An <a href="https://www.caltech.edu/about/news/scientists-create-embryo-like-model-that-mimics-post-implantation-stage-of-human-development">embryo-like model</a> made from stem cells that mimics the second week of human embryo development may soon offer new insights into why some pregnancies fail, where certain defects and 
diseases emerge, and also help scientists figure out how to develop synthetic human organs for transplant.</p><p data-block-key="bm9dk">At smaller scales, Caltech researchers investigated the mechanisms by which a particular type of bacteriophage (a tiny virus that targets bacterial cells) called <a href="https://www.caltech.edu/about/news/little-phage-that-could">φX174</a> escapes its bacterial host and successfully infects and destroys additional bacterial cells. This work could lead to new treatments for bacterial infections that are resistant to existing antibiotics. Researchers also developed a new <a href="https://www.caltech.edu/about/news/microscopy-techniques-combine-to-create-more-powerful-imaging-device">molecule-imaging apparatus</a> to visualize materials at the single-molecule level, and devised a new <a href="https://www.caltech.edu/about/news/drug-delivery-platform-uses-sound-for-targeting">drug delivery platform</a> using ultrasound and gas vesicles that shows promise for targeting chemotherapy more directly against cancer cells.</p><p data-block-key="156m3">On the COVID-19 front, Caltech researchers developed a <a href="https://www.caltech.edu/about/news/at-home-rapid-covid-test-sensitivity">more sensitive at-home COVID-19 antigen test</a> with a technology that can be utilized to design tests for other pathogens, and combined the two different techniques used in current COVID-19 vaccines—mRNA technology and protein nanoparticle technology—to make a potent <a href="https://www.caltech.edu/about/news/new-vaccine-technology-produces-more-antibodies-against-sars-cov-2-in-mice">hybrid vaccine</a>.
In other work, biologists presented new insight into the <a href="https://www.caltech.edu/about/news/imaging-breakthroughs-provide-insight-into-the-dynamic-architectures-of-hiv-proteins">biological processes of the human immunodeficiency virus (HIV)</a> at the atomic scale.</p><h3 data-block-key="6hs6f">Heat Waves, Air Pollution, and Solar Power from Space, Oh My!</h3><p data-block-key="1oajj">Caltech continues to investigate the drivers behind climate change and to develop alternative sources of energy. Since 2014, California state law has required cutting methane emissions, but a Caltech study found <a href="https://www.caltech.edu/about/news/methane-emissions-in-la-are-decreasing-more-slowly-than-previously-estimated">those emissions are decreasing in the Los Angeles area much more slowly than utility companies have estimated</a>. In other work, researchers showed that LA County's recent <a href="https://www.caltech.edu/about/news/low-income-areas-experience-hotter-temperatures-in-la-county">record-breaking heat waves hit low-income areas harder than more affluent ones</a>, and developed new techniques for understanding the chemistry involved in <a href="https://www.caltech.edu/about/news/chemists-tackle-formation-of-natural-aerosols">the naturally occurring conversion of volatile organic compounds into aerosols</a> that will allow scientists to better predict the impact of aerosols on the environment and on human health.</p><p data-block-key="8vbvc">On the green energy front, in <a href="https://www.caltech.edu/about/news/caltech-to-launch-space-solar-power-technology-demo-into-orbit-in-january">January 2023</a>, the Caltech Space Solar Power Project (SSPP) launched an instrument into orbit around Earth that harvests solar power and wirelessly transmits it to Earth. 
In the spring, this instrument, the Space Solar Power Demonstrator (SSPD), was the first to <a href="https://www.caltech.edu/about/news/in-a-first-caltechs-space-solar-power-demonstrator-wirelessly-transmits-power-in-space">successfully receive solar power and transmit it to Earth</a>, where it was detected by a receiver on the rooftop of the Gordon and Betty Moore Laboratory of Engineering on Caltech's campus.</p><h3 data-block-key="3irl5">Evolving Optics, Color-Changing Plastics, and Mighty Morphin' Robots</h3><p data-block-key="249hr">Caltech scientists and engineers identified, engineered, and designed a series of new devices and materials that have the potential to reshape our world, including the creation of <a href="https://www.caltech.edu/about/news/a-rainbow-of-force-activated-pigments">polymers that change color when stress is applied to them</a>, making the location of strain visible; metals 3D printed at the nanoscale with messy atomic arrangements that surprisingly make them three to five times stronger than similarly sized materials with more orderly structures; and <a href="https://www.caltech.edu/about/news/evolving-and-3d-printing-new-nanoscale-optical-devices">3D-printed nanoscale optical devices</a> that are so small they could direct different colors of light to individual pixels in a camera's image.</p><p data-block-key="2igj0">On a larger scale, Caltech engineers created M4, the Multi-Modal Mobility Morphobot, a <a href="https://www.caltech.edu/about/news/new-bioinspired-robot-flies-rolls-walks-and-more">bioinspired robot</a> that is capable of eight different types of motion (including flying, rolling, and walking) and can sense upcoming terrain and select the most effective form of locomotion.</p><h3 data-block-key="b1a39">Quantum Sound, Quantum Microscopes, Quantum Erasers, and a New Center for Quantum Research</h3><p data-block-key="74rmb">Caltech expanded its presence as a premier hub of quantum research with the summer <a href="https://www.caltech.edu/about/news/breaking-ground-cqpm">groundbreaking of the Dr. Allen and Charlotte Ginsburg Center for Quantum Precision Measurement</a>. The center will serve as an interdisciplinary home for precision measurement, quantum information, and the detection of gravitational waves, or ripples in space-time.</p><p data-block-key="70nr9">In other news concerning the quantum realm, a new method was revealed for <a href="https://www.caltech.edu/about/news/new-device-opens-door-to-storing-quantum-information-as-sound-waves">converting electrical quantum states into sound and back again</a>, allowing devices to store sound (which, like light, is both a particle and a wave) for future quantum computers. Other researchers <a href="https://www.caltech.edu/about/news/quantum-entanglement-of-photons-doubles-microscope-resolution">doubled the resolution of microscopes</a> through quantum entanglement, in which the states of two particles remain linked even when the particles are far apart.
Entanglement is central to this year's development of "<a href="https://www.caltech.edu/about/news/a-new-way-to-erase-quantum-computer-errors">quantum erasers</a>" that can remove certain types of errors in quantum computers.</p><h3 data-block-key="d83j1">Slithering, Swimming, and Spinning—Animal Motion by the Numbers</h3><p data-block-key="8jsg1">Finally, as phenomena in the natural world are mapped mathematically, surprising consonances are uncovered, including the discovery that when very different animals—such as snakes, single-celled organisms, and sting rays—<a href="https://www.caltech.edu/about/news/what-do-a-jellyfish-a-cat-a-snake-and-an-astronaut-have-in-common-math">move by changing their shape, a single mathematical algorithm</a> can successfully describe their motion.</p>Caltech Postdoc Wins L'Oréal For Women in Science Award2023-12-07T08:18:00+00:00Cynthia Ellerceller@caltech.eduhttps://divisions.caltech.edu/newspage-index/2023-LOreal-award<p data-block-key="7idkv">Each year the <a href="https://www.loreal.com/en/usa/news/commitments/lusa-announces-2023-for-women-in-science-awardees/">L'Oréal USA For Women in Science</a> fellowship program grants five $60,000 awards "to cultivate a postdoctoral community of women, empowering them to persist in their research, attain leadership roles, and become inspirational mentors for the generations of women and girls that will follow in their path."</p><p data-block-key="chtfh">This year Jessleen (Jess) Kanwal, a postdoctoral scholar research associate in the laboratory of Joe Parker, assistant professor of biology and biological engineering and Chen Scholar, received this grant for her work with rove beetles and her plans to generate interest in STEM fields among teenagers through workshops on dance and neuroscience. 
We recently had a conversation with Kanwal about her research at Caltech and how she plans to use her award.</p><p data-block-key="2cmjl"><b>What is the research you will be supporting through your L'Oréal For Women in Science grant?</b></p><p data-block-key="64kf2">I'm interested in how organisms combine cues across sensory modalities to distinguish friend from foe from food. How do they detect what's a threat and then make rapid decisions about whether to defend themselves or flee? To answer these questions, I am working with tiny little insects called rove beetles. They're only about 2.5 millimeters in length and half a millimeter wide. Ultimately, I want to know how their nervous system integrates different sensory cues from the organisms they interact with, transforms this into information about what type of animal they are facing—whether it be predator, prey, or another rove beetle—and then decides how best to respond.</p><p data-block-key="cpugp"><b>How did you get interested in this work?</b></p><p data-block-key="bi7pi">I've been fascinated by both bugs and brains ever since participating in a fun summer research experience as an undergraduate. I find it amazing how insects perceive and navigate through the world, the many ways they interact with each other, and the behaviors they are capable of performing. The world looks, smells, and feels very different to them than it does to us. Even though they are so small and experience the environment at a completely different scale from us, they are experts at finding food, avoiding danger, and communicating with each other. I enjoyed observing this firsthand during one of my undergraduate research experiences, where I had the opportunity to watch honeybees dance. This is how they tell their hive mates where the best food is located. 
Seeing insects perform complex behaviors like the honeybee waggle dance made me wonder how their tiny brains detect, combine, and represent all the sensory information they need to survive.</p><p data-block-key="dqfht"><b>Do rove beetles have any special abilities like that?</b></p><p data-block-key="830m9">Yes. They have an amazing chemical defense gland in their abdomen. Whenever they are attacked by predators, they flex their abdomen and smear the contents of this gland onto the threat. The gland releases toxic chemicals that deter predators from killing them.</p><p data-block-key="2scb6"><b>That would be a great talent to bring to a nightclub! Is this trait of rove beetles the reason that you're interested in working with them?</b></p><p data-block-key="4f6j3">Yes, partly. No one has ever looked into the brains of these beetles. We're exploring their nervous system for the first time, and we think that this new system is going to be really insightful for understanding the neuroscience of how insects interact with other species. Another reason these beetles are so fascinating to study is that we have the potential to explore how their nervous system has evolved to enable new behavioral interactions between species.</p><p data-block-key="2apr6"><b>How do you learn how a rove beetle's brain interprets its environment?</b></p><p data-block-key="fe0ur">Currently, we are examining this question in two ways. First, we can learn a lot about the brain of a rove beetle through behavioral observations combined with genetic manipulation of neural cell types. We first develop special arenas in the lab to quantify beetle behavior. For example, in one arena beetles are tethered on a floating ball—it's like walking on a treadmill—and we gradually present a predatory ant to them and watch what happens.
With multiple cameras surrounding the beetle, these types of experiments allow us a very high-resolution, stereotyped, and quantifiable method to examine beetle behavior during interactions with other species. We have also built arenas where rove beetles and other insects are able to freely roam in a 3D environment, and we watch how they interact. A lot of my time in the lab is spent watching an action-packed beetle reality TV show. It's filled with mystery about what's going to happen when the beetle encounters another organism and what cues from the environment are essential to trigger its response. We then use genetic tools to silence certain neurons, like those required for the beetle to smell, and observe how this alters their ability to distinguish and interact with other species.</p><p data-block-key="biqjo">The second way we are probing the beetle brain is by mapping out the architecture of their nervous system. It must be done under a microscope, of course, because they're so tiny. On dissection days we stay away from coffee or tea because even the slightest bit of jittery hands can ruin these dissections. But by doing this we can look inside their brain and their spinal cord equivalent to identify the structure of key regions that may detect and process sensory cues from the organisms they interact with. We especially want to map out the smell and taste areas of their brain because we suspect that these regions are really important for their detection of predators and prey. Ultimately, we plan to use genetic tools and fluorescence indicators to read out the activity of beetle brain neurons in response to sensory cues from other species. This will enable us to see what cues are most salient and how the brain encodes information about other organisms.</p><p data-block-key="e1alh"><b>Why did you want to come to Caltech as a postdoc?</b></p><p data-block-key="7a2du">I was intrigued by the beetles and attracted to the science happening in Joe's lab. 
After spending graduate school studying how fruit fly larvae use smell and taste to find food, I was excited to explore how these cues enable organisms to interact with one another: basically, how the brain coordinates complex behavioral interactions. Also, the prospect of working in a new system where I could study the animal's behavior in its ecological context was very exciting.</p><p data-block-key="d57be">Actually, given how my postdoc began, my interest in studying behavior turned out to be very convenient. I started at Caltech in February 2020 and only had a few weeks in the lab before the pandemic shut us down. About a month into lockdown, I started going out and collecting insects. I got a camera and a small stage, and a box of rove beetles from the lab. Then I spent much of the pandemic recording 20-minute videos of insect interactions on my kitchen table. I used machine-vision software to track their behaviors and started to see what a day in the life of these beetles was like. Beetles spent very little time interacting with potentially harmful species, choosing to flex their abdomen, or flee during close encounters. Meanwhile, the beetles spent a lot more time investigating the species that did not pose a threat, using their antennae to sense the organisms. So how do the beetles distinguish between these different species? Is it because they smell different, look different, move different from predators, or some combination of these? Clearly making this distinction is critical to their survival, and the beetles have to be selective about when to deploy their chemicals against potential predators as they have a limited supply at any given moment.</p><p data-block-key="dp7ik"><b>The L'Oréal award supports your research but is also intended to enable outreach to girls and women in STEM. 
How will you be working toward this goal?</b></p><p data-block-key="71mco">I'm really excited to use this award to develop outreach workshops at the intersection of science and the performing arts. I've started to collaborate with some local dance groups in the LA area, and we're designing workshops on the neuroscience of dance that will target underserved communities in Pasadena and the greater Los Angeles area. Our goal is to use dance as a way to get middle and high school students interested in and excited to learn about the brain and how it senses the world and coordinates creative movement. I will also be using the funds to hire and support a young female research technician to work alongside me on my research projects in the lab. I hope to provide her with the mentorship and career support needed to successfully transition to graduate school and a fulfilling career in science.</p>Advancements Make Laser-Based Imaging Simpler and Three-Dimensional2023-12-01T16:46:00+00:00Emily Velascoevelasco@caltech.eduhttps://divisions.caltech.edu/newspage-index/advancements-make-laser-based-imaging-simpler-and-three-dimensional<p data-block-key="nv5pd">There are times when scientific progress comes in the form of discovering something completely new. Other times, progress boils down to doing something better, faster, or more easily.</p><p data-block-key="3b8vm">New research from the lab of Caltech's <a href="https://www.eas.caltech.edu/people/lvw">Lihong Wang</a>, the Bren Professor of Medical Engineering and Electrical Engineering, is the latter. 
In a paper published in the journal <i>Nature Biomedical Engineering,</i> Wang and postdoctoral scholar Yide Zhang show how they have simplified and improved <a href="https://www.caltech.edu/about/news/advancement-simplifies-laser-based-medical-imaging">an imaging technique</a> they first announced in 2020.</p><p data-block-key="b46ar">That technique, a form of photoacoustic imaging technology called PATER (Photoacoustic Topography Through an Ergodic Relay), is a specialty of Wang's group.</p><p data-block-key="codm6">In photoacoustic imaging, laser light is pulsed into tissue where it is absorbed by the tissue's molecules, causing them to vibrate. Each vibrating molecule serves as a source of ultrasonic waves that can be used to image the internal structures in a fashion similar to how ultrasound imaging is performed.</p><p data-block-key="aq9ut">However, photoacoustic imaging is technologically challenging because it produces all its imaging information in one short burst. To capture that information, early versions of Wang's photoacoustic imaging technology required arrays of hundreds of sensors (transducers) to be pressed against the surface of the tissue being imaged, which made the technology complicated and expensive.</p><p data-block-key="e86u">Wang and Zhang reduced the number of required transducers by using a device called an ergodic relay, which slows down the rate at which information (in the form of vibrations) flows into a transducer. As explained in a <a href="https://www.caltech.edu/about/news/advancement-simplifies-laser-based-medical-imaging">previous story</a> about PATER:</p><indent data-block-key="efhpm"><i>In computing, there are two main ways to transmit data: serial and parallel. In serial transmission, the data are sent in a single stream through one communication channel. 
In parallel transmission, several pieces of data are sent at the same time using multiple communication channels.</i></indent><indent data-block-key="6pq2f"><i>The two types of communication are roughly analogous to the way cash registers might be used in a store. Serial communication would be like having one cash register. Everyone gets in the same line and sees the same cashier. Parallel communication would be like having several registers and a line for each.</i></indent><indent data-block-key="f89ls"><i>The system Wang designed with 512 sensors is similar to the store with many cash registers. All of the sensors are working at the same time, with each taking in part of the data about the ultrasonic vibrations generated by the laser pulse.</i></indent><indent data-block-key="9nbde"><i>Since the ultrasonic vibrations from the system come in one short burst, a single sensor would be overwhelmed if it were used to try and collect all the data in that short amount of time. That's where the ergodic relay comes in.</i></indent><indent data-block-key="4n8a9"><i>As Wang describes it, an ergodic relay is a sort of chamber around which sound can echo. When the ultrasonic vibrations pass through the ergodic relay, they are stretched out in time. 
To return to the cash-register metaphor, it would be like having another employee assisting the single cashier by telling the customers to walk a few laps around the store until the cashier is ready to see them, so the cashier does not become overwhelmed.</i></indent><p data-block-key="e63sn">The latest version of this technology, called PACTER (Photoacoustic <i>Computed</i> Tomography Through an Ergodic Relay) goes even further, allowing the system to operate using a single transducer that, through the use of software, can collect as much data as 6,400 transducers.</p><p data-block-key="ceroo"></p><embed embedtype="media" url="https://www.youtube.com/watch?v=L7r5eNtaLq4"/><p data-block-key="e98r8"></p><p data-block-key="eha2b">PACTER improves on PATER in two other ways, says Wang, who is also the Andrew and Peggy Cherng Medical Engineering Leadership Chair and executive officer for medical engineering.</p><p data-block-key="2tsfh">One improvement is that PACTER can create three-dimensional images, whereas PATER can only generate 2D images. This was enabled by the development of improved software.</p><p data-block-key="enes2">"Transitioning to 3D imaging significantly escalates the data requirement. The challenge was funneling the immensely increased data through a single transducer," Zhang says. "Our solution emerged by altering our approach. Rather than a direct and computationally intensive method of reconstructing 3-D images from the single-transducer data, we first expanded one transducer into thousands of virtual ones. This idea simplified the process of 3D image reconstruction, aligning it more closely with the traditional methods in our photoacoustic imaging."</p><p data-block-key="eiq74">Secondly, unlike PATER, PACTER does not need to be calibrated each time it is used.</p><p data-block-key="h2ie">"With PATER, we had to calibrate it each time to use it and that's just not practical. 
We got rid of this per-use calibration," Wang says.</p><p data-block-key="do8e1">Calibration was needed because when the system fires a pulse of laser light into tissue, an "echo" of that pulse would bounce back into the transducer, preventing it from sensing direct ultrasound information.</p><p data-block-key="efd2p">Wang says PACTER gets around that issue by adding something called a delay line to the system. The delay line forces the echo to take a longer physical path on its way back to the transducer so that it arrives after the direct ultrasound information has been received.</p><p data-block-key="7cgq2">"Even though I always said this was possible, I knew it would be challenging," Wang says.</p><p data-block-key="8kegl">The paper describing the work, "Ultrafast longitudinal imaging of haemodynamics via single-shot volumetric photoacoustic tomography with a single-element detector," appears in the November 30 issue of <i>Nature Biomedical Engineering.</i> Co-authors are Peng Hu (PhD '23), former graduate student in medical engineering; Lei Li (PhD '19), former postdoc in medical engineering; Rui Cao, postdoc in medical engineering; Anjul Khadria, former postdoc in medical engineering; Konstantin Maslov, former staff scientist at Caltech; Xin Tong, graduate student in medical engineering; and Yushun Zeng, Laiming Jiang, and Qifa Zhou of USC.</p><p data-block-key="e9bit">Funding for the research was provided by the National Institutes of Health.</p>Ultrasound Enables Less-Invasive Brain–Machine Interfaces2023-11-30T19:03:00+00:002023-11-30T20:07:00.451048+00:00Lori Dajoseldajose@caltech.eduhttps://divisions.caltech.edu/newspage-index/ultrasound-enables-less-invasive-brainmachine-interfaces<p data-block-key="q1u98">Brain–machine interfaces (BMIs) are devices that can read brain activity and translate that activity to control an electronic device like a prosthetic arm or computer cursor.
They promise to enable people with paralysis to move prosthetic devices with their thoughts.</p><p data-block-key="2p12b">Many BMIs require invasive surgeries to implant electrodes into the brain in order to read neural activity. However, in <a href="https://www.caltech.edu/about/news/reading-minds-with-ultrasound-a-less-invasive-technique-to-decode-the-brains-intentions">2021</a>, Caltech researchers developed a way to read brain activity using functional ultrasound (fUS), a much less invasive technique.</p><p data-block-key="dca5i">Now, a new study is a proof-of-concept that fUS technology can be the basis for an "online" BMI—one that reads brain activity, deciphers its meaning with decoders programmed with machine learning, and consequently controls a computer that can accurately predict movement with very minimal delay time.</p><p data-block-key="4vcbm">The study was conducted in the Caltech laboratories of <a href="https://www.bbe.caltech.edu/people/richard-a-andersen">Richard Andersen</a>, James G. Boswell Professor of Neuroscience and director and leadership chair of the <a href="https://neuroscience.caltech.edu">T&C Chen Brain–Machine Interface Center</a>; and <a href="https://cce.caltech.edu/people/mikhail-g-shapiro?back_url=%2Fpeople%3Fcategory%3D%26category%3D3%26search%3D%26submit%3DSearch%2B%25C2%25A0%2B%253E">Mikhail Shapiro</a>, Max Delbrück Professor of Chemical Engineering and Medical Engineering and Howard Hughes Medical Institute Investigator. The work was a collaboration with the laboratory of Mickael Tanter, director of physics for medicine at INSERM in Paris, France.</p><p data-block-key="4ptfr">"Functional ultrasound is a completely new modality to add to the toolbox of brain–machine interfaces that can assist people with paralysis," says Andersen. "It offers attractive options of being less invasive than brain implants and does not require constant recalibration. 
This technology was developed as a truly collaborative effort that could not be accomplished by one lab alone."</p><p data-block-key="f74mi">"In general, all tools for measuring brain activity have benefits and drawbacks," says Sumner Norman, former senior postdoctoral scholar research associate at Caltech and a co-first author on the study. "While electrodes can very precisely measure the activity of single neurons, they require implantation into the brain itself and are difficult to scale to more than a few small brain regions. Non-invasive techniques also come with tradeoffs. Functional magnetic resonance imaging [fMRI] provides whole-brain access but is restricted by limited sensitivity and resolution. Portable methods, like electroencephalography [EEG] are hampered by poor signal quality and an inability to localize deep brain function."</p><p data-block-key="ftpga">Ultrasound imaging works by emitting pulses of high frequency sound and measuring how those sound vibrations echo throughout a substance, such as various tissues of the human body. Sound waves travel at different speeds through these tissue types and reflect at the boundaries between them. This technique is commonly used to take images of a fetus <i>in utero</i>, and for other diagnostic imaging.</p><p data-block-key="5p3jk">Because the skull itself is not permeable to sound waves, using ultrasound for brain imaging requires a transparent "window" to be installed into the skull. "Importantly, ultrasound technology does not need to be implanted into the brain itself," says Whitney Griggs (PhD '23), a co-first author on the study. "This significantly reduces the chance for infection and leaves the brain tissue and its protective dura perfectly intact."</p><p data-block-key="dc4f4">"As neurons' activity changes, so does their use of metabolic resources like oxygen," says Norman. "Those resources are resupplied through the blood stream, which is the key to functional ultrasound." 
In this study, the researchers used ultrasound to measure changes in blood flow to specific brain regions. In the same way that the sound of an ambulance siren changes in pitch as it moves closer and then farther away from you, red blood cells will increase the pitch of the reflected ultrasound waves as they approach the source and decrease the pitch as they flow away. Measuring this Doppler-effect phenomenon allowed the researchers to record tiny changes in the brain's blood flow down to spatial regions just 100 micrometers wide, about the width of a human hair. This enabled them to simultaneously measure the activity of tiny neural populations, some as small as just 60 neurons, widely throughout the brain.</p><p data-block-key="blk9e">The researchers used functional ultrasound to measure brain activity from the posterior parietal cortex (PPC) of non-human primates, a region that governs the planning of movements and contributes to their execution. The region has been studied by the Andersen lab for decades using other techniques. The animals were taught two tasks, requiring them to either plan to move their hand to direct a cursor on a screen, or plan to move their eyes to look at a specific part of the screen. They only needed to <i>think</i> about performing the task, not actually move their eyes or hands, as the BMI read the planning activity in their PPC.</p><p data-block-key="3qn4h">"I remember how impressive it was when this kind of predictive decoding worked with electrodes two decades ago, and it's amazing now to see it work with a much less invasive method like ultrasound," says Shapiro.</p><p data-block-key="3a6f">The ultrasound data was sent in real-time to a decoder (previously trained to decode the meaning of that data using machine learning), and subsequently generated control signals to move a cursor to where the animal intended it to go. 
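The Doppler-effect measurement described earlier, in which the pitch shift of the reflected ultrasound reveals how fast blood is flowing, can be illustrated with a short sketch. The function name and the numbers here are illustrative assumptions, not the study's actual imaging parameters:

```python
# Doppler velocimetry sketch: estimate blood-flow speed from the shift
# between the emitted and reflected ultrasound frequencies.
# v = c * delta_f / (2 * f0 * cos(theta)), where c is the speed of
# sound in tissue, f0 the emitted frequency, and theta the angle
# between the ultrasound beam and the direction of flow.
import math

def flow_speed(f_emitted_hz: float, f_reflected_hz: float,
               angle_deg: float, c_tissue: float = 1540.0) -> float:
    """Blood speed in m/s from the Doppler shift of a reflected wave.

    Positive values mean flow toward the probe (pitch raised);
    negative values mean flow away from it (pitch lowered).
    """
    delta_f = f_reflected_hz - f_emitted_hz
    return c_tissue * delta_f / (2 * f_emitted_hz * math.cos(math.radians(angle_deg)))

# A hypothetical 15 MHz pulse reflected back 200 Hz higher by cells
# moving toward the probe at a 30-degree beam-to-flow angle:
speed = flow_speed(15e6, 15e6 + 200.0, 30.0)
print(f"{speed * 100:.2f} cm/s")  # roughly a centimeter per second
```

Slow flows like this produce only tiny frequency shifts relative to the emitted pulse, which is why detecting them at 100-micrometer resolution requires such sensitive hardware.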
The BMI successfully directed the cursor to eight radial targets, with mean errors of less than 40 degrees.</p><p data-block-key="ap122">"It's significant that the technique does not require the BMI to be recalibrated each day, unlike other BMIs," says Griggs. "As an analogy, imagine needing to recalibrate your computer mouse for up to 15 minutes each day before use."</p><p data-block-key="8eu86">Next, the team plans to study how BMIs based on ultrasound technology perform in humans, and to further develop the fUS technology to enable three-dimensional imaging for improved accuracy.</p><p data-block-key="2mr9t">The paper is titled <a href="https://www.nature.com/articles/s41593-023-01500-7">"Decoding motor plans using a closed-loop ultrasonic brain–machine interface"</a> and appears in the journal <i>Nature Neuroscience</i> on November 30. Whitney Griggs (PhD '23), UCLA-Caltech MD/PhD student, and Sumner Norman, former postdoctoral scholar now of Forest Neurotech, are the study's first authors. In addition to Griggs, Norman, and Andersen, Caltech coauthors are graduate student Geeling Chau and Vasileios Christopoulos, visiting associate in biology and biological engineering. Other coauthors are Charles Liu of USC; and Mickael Tanter, Thomas Deffieux, and Florian Segura of INSERM in Paris, France. Funding was provided by the National Eye Institute, a Josephine de Karman Fellowship, the UCLA-Caltech MSTP, the Della Martin Foundation, the National Institute of Neurological Disorders and Stroke, the National Institutes of Health, the T&C Chen Brain-Machine Interface Center, and the Boswell Foundation.</p>Attention, Focus, and a High Risk of Alzheimer's2023-11-30T18:19:43.547174+00:00Lori Dajoseldajose@caltech.eduhttps://divisions.caltech.edu/newspage-index/attention-focus-and-a-high-risk-of-alzheimers<p data-block-key="q6jb0">Alzheimer's disease is a neurodegenerative condition that damages a person's ability to think, remember, and perform basic functions.
According to the National Institutes of Health, Alzheimer's affects more than 6 million Americans, mostly ages 65 and older. Though the neurological damage from the disease is irreversible, its progression can be slowed by early interventions such as exercise and nutrition regimens. Thus, an early screening for Alzheimer's risk can be vital in helping people manage and plan for their symptoms.</p><p data-block-key="8thhi">However, before the onset of Alzheimer's physical symptoms, the primary method to measure an individual's risk of developing the disease is by measuring levels of certain proteins in cerebrospinal fluid (higher levels indicate higher risk). This test is invasive, painful, and expensive.</p><p data-block-key="3803h">A team from Caltech and the Huntington Medical Research Institutes is conducting an ongoing project to develop a simple behavioral test to detect a person's Alzheimer's risk, one as noninvasive as solving a puzzle on a computer. In 2022, the team developed a behavioral test whose results accurately correlated with spinal fluid measurements.</p><p data-block-key="3g5ia">Now, the team has used the test to discover more about high-risk individuals' ability to pay attention and focus. The work, described in a paper appearing in the journal <i>GeroScience</i>, suggests that high-risk individuals are using their attention to process, rather than suppress, distracting stimuli. The research was conducted in the Caltech laboratory of <a href="https://www.bbe.caltech.edu/people/shinsuke-shin-shimojo">Shinsuke Shimojo</a>, Gertrude Baltimore Professor of Experimental Psychology.
Shimojo is an affiliated faculty member of the <a href="https://neuroscience.caltech.edu">Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech</a>.</p><p data-block-key="22cre">"It has been all the researchers' dream in the field to come up with a very sensitive psychological paradigm to detect subtle pre-symptoms in the high-risk elderly," says Shimojo. "However, it was nearly impossible because those high-risk elderly are <i>not</i> diagnosed with the current official standard tests. Our success was owing to two new twists: First, implicit cognitive processing that requires attention. And second, the hypothesis that the cognitive limitation would reveal only under high task load."</p><p data-block-key="bclp0">In the test, a participant completes a so-called Stroop Paradigm task. This is a common puzzle in which a person is shown a word—the word is the name of a color—displayed on a computer monitor in colored text. However, the word itself does not necessarily match the color it is displayed in—for example, the word "RED" could be displayed in the color green. In each iteration of the task, the participant is asked to name either the color of the word or the word itself. Compared to naming the word itself, naming the color of the text is considered "high effort"—it is more challenging than it might seem. (You can try it yourself below.)</p><p data-block-key="7s9mt"></p><embed alt="The word &quot;RED&quot; in green color" embedtype="image" format="MiddleAlignMedium" id="34941"/><p data-block-key="9u2vt">But researchers have also added an extra twist to make the task a bit more challenging. Right before the actual target is shown, a word (white on a white background, and "masked" by several meaningless symbols) is flashed rapidly on the screen—so rapidly that a participant cannot detect it consciously.
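The trial structure described above, a color word in a possibly mismatched ink color, a task condition, and a masked prime, can be sketched as a minimal data structure. The field names here are hypothetical and are not the study's actual code:

```python
# Sketch of one Stroop trial with a masked prime, as described above.
# Field names are illustrative; the real paradigm has more detail
# (timing, masking symbols, response collection, and so on).
from dataclasses import dataclass

@dataclass
class StroopTrial:
    word: str   # the color name shown, e.g. "RED"
    ink: str    # the color the word is rendered in, e.g. "green"
    task: str   # "name_word" (low effort) or "name_ink" (high effort)
    prime: str  # masked distractor word flashed just before the target

    @property
    def congruent(self) -> bool:
        # "RED" printed in red ink is congruent; "RED" in green is not.
        return self.word.lower() == self.ink.lower()

# The classic incongruent case from the article: the word "RED" in green,
# with the participant asked to name the ink color (high effort).
trial = StroopTrial(word="RED", ink="green", task="name_ink", prime="BLUE")
print(trial.congruent)  # False
```

Incongruent, high-effort trials like this one are where naming slows down the most, which is why the study measures interference under high task load.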
</p><p data-block-key="3lhl1">The white word—technically called an "implicit distractor"—is intended to unconsciously distract the participant. In addition to conscious and intentional information gathering, which is known as "explicit cognition," our brains have the ability to process sensory information without conscious awareness. This is known as "implicit cognition."</p><p data-block-key="8uec">The study involved 36 people with an average age of 75 who were cognitively healthy. Each underwent myriad tests related to Alzheimer's risk: magnetic resonance imaging (MRI) of the brain, genome sequencing, and the aforementioned invasive cerebrospinal fluid measurements. From these biological markers, individuals could be categorized as high or low risk.</p><p data-block-key="dbutu">In the 2022 study, the team found that individuals who were at high risk for developing Alzheimer's (as measured by their spinal-fluid levels) slowed down by about 5 percent with the presence of the implicit distractor in the high-effort condition. This implicit interference was not found in the low-risk individuals. These findings suggest that implicit cognition may be altered years before the onset of any classic Alzheimer's symptoms.</p><p data-block-key="3rsus">The new study focused on understanding <i>how</i> the individuals were using their attention during the test. Think of attention as a kind of currency—a finite resource your brain can spend. We all have experienced our attention being distracted from a specific task. Perhaps your phone notifications or a noisy room are distracting your attention from reading this article right now.
The team aimed to determine if the high-risk population is using their attention to <i>process</i> the distracting word instead of suppressing the distraction and blocking it out.</p><p data-block-key="5e98i">"Your brain will unconsciously perceive the distracting word whether you have a high or low risk for Alzheimer's," says Shao-Min (Sean) Hung, a former postdoctoral scholar in the Shimojo group, currently an assistant professor at Waseda University in Japan and the study's co-first author. "But we wanted to study what does your brain do next? Do you use your effort to suppress the distraction or do you use effort to process the distractor? Healthy individuals with low risk of cognitive impairment should be able to suppress the distraction."</p><p data-block-key="874q3">To examine this, the team had the same volunteers complete the task twice, two weeks apart. The idea is that <i>practice</i> reduces the mental load of the task and allows you to have more attention available. For example, if you're an experienced soccer player, you may be able to easily dribble a ball while using some of your attention to process other things in your environment. But if you're new to soccer, you need to use a lot of attention and focus to properly dribble the ball. Practice frees up attention for your brain to use elsewhere.</p><p data-block-key="5qhno">The researchers found that after practicing the task, the low-risk individuals utilize their extra attention to <i>suppress</i> the distracting word and thus are less distracted. On the contrary, the high-risk individuals use their extra attention to <i>process</i> the distracting word—taking in unnecessary information that distracts them from the task at hand and results in stronger interference to their performance. 
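The interference measure behind these comparisons, the percent slowdown in mean reaction time when the masked distractor is present versus absent, can be sketched as follows. The reaction times below are invented for illustration and simply chosen to land near the roughly 5 percent effect the 2022 study reported:

```python
# Sketch of the implicit-interference measure: percent slowdown in mean
# reaction time (RT) when the masked distractor is present vs. absent.
from statistics import mean

def interference_pct(rt_distractor: list[float],
                     rt_no_distractor: list[float]) -> float:
    """Percent slowdown of mean RT attributable to the masked distractor."""
    base = mean(rt_no_distractor)
    return 100.0 * (mean(rt_distractor) - base) / base

# Hypothetical high-effort-condition RTs in milliseconds:
with_prime = [820, 845, 790, 880]
without_prime = [780, 800, 760, 830]
print(f"{interference_pct(with_prime, without_prime):.1f}% slowdown")
```

A larger value after practice, when spare attention should make the task easier, is the signature the researchers associate with high-risk individuals.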
These distractions did not lead to significantly worse performance overall, but the interference was evident: high-risk individuals who showed a practice effect (that is, who were faster in the second task) slowed down even more in the presence of the distracting word.</p><p data-block-key="5qn5i">"These results suggest that there is a tight link between implicit cognition and attention, and the changes in implicit cognition in the high-risk population could reflect very early shift in how attention is utilized," says Hung.</p><p data-block-key="bum78">The study is titled <a href="https://link.springer.com/article/10.1007/s11357-023-00953-9">"Practice makes imperfect: stronger implicit interference with practice in individuals at high risk of developing Alzheimer's disease."</a> Caltech research technician-assistant Sara Adams (BS '21) is co-first author along with Hung. In addition to Hung, Adams, and Shimojo, other co-authors are Cathleen Molloy and Xianghong Arakaki of the Huntington Medical Research Institutes and Caltech senior scientist Daw-An Wu (PhD '06). Funding was provided by the James Boswell Postdoctoral Fellowship, a Caltech Biology and Biological Engineering Divisional Postdoctoral Fellowship, and the Aligning Consciousness Research with US Funding Mechanisms by Templeton World Charity Foundation, the Whittier Foundation, and the National Institutes of Health.</p>