SciAPP Conference - 2019

The 2019 Science, the Arts and Possibilities in Perception (SciAPP) workshop brought together perception scientists, engineers, entrepreneurs and artists with the goal of expanding the experience of perception. The workshop included talks from experts in neuroscience, psychology, physics, math, engineering, science communication, business and art and included music performances and art shows.

 SciAPP was created by ASU’s SciHub, which is co-directed by Frank Wilczek, Distinguished Origins Professor and the 2004 Nobel laureate in physics, and Nathan Newman, ASU’s Lawrence Professor of Solid State Sciences. More information about the genesis of the SciAPP workshop can be found here.

 This page hosts a sampling of what happened at the 2019 SciAPP workshop. Please check back for information about future workshops.

Upcoming SciAPP Workshop

The SciAPP Workshop provides a forum for presentations and discussions of recent developments, critical issues and potential future directions of perception theory, with an emphasis on the arts. In the workshop, 16 invited speakers will discuss “big questions” that cross the boundaries of science, the arts, and perception theory. Topics range from fundamental to applied. Each speaker will give a brief introduction to the current state of the field, aimed at identifying steps that may be taken to answer fundamental questions and to benefit science, art, and society. Each speaker is given 40 minutes for presentation and 10 minutes for discussion. The workshop itself is limited to 70 additional participants, keeping it small enough to be highly interactive but large enough to be comprehensive.

The ASU SciHub brings together diverse university and community groups to create an integrated research, teaching, outreach, and product development program. SciHub includes faculty, staff and students, as well as local teachers, scientists, engineers, medical professionals, product developers, designers, museum conservators, and artists.

Current projects include development of commercial products, research instrumentation, museum exhibits, and university and K-12 outreach programs. The workshop is organized in collaboration with the SAMBA (Science of Art, Music, & Brain Activity) research group.


Featured Conference Talks:



Brian A. Wandell
Isaac and Madeline Stein Family Professor
Department of Psychology
Stanford University

When we look at something, what we see is not a video recording from our eyes. What we see is an assumption made by our brain.

Physicists and neuroscientists have decoded how the brain processes visual signals like sunlight, color and reflections, but understanding what we see requires psychology. We see illusions because of the inferences (which are generally very good) that the brain is continually making based on the visual input it receives from our eyes.

In this image, the two gray squares are exactly the same shade, even though the square on the left looks lighter. Because the background on the left is darker, the brain infers the intensity of light coming from that region must be less overall. This assumption makes the center square look whiter than it really is.
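The stimulus described above is easy to reconstruct. Here is a minimal sketch using NumPy; the specific gray values and sizes are illustrative assumptions, not taken from the talk:

```python
import numpy as np

def contrast_stimulus(size=200, square=60, dark=0.25, light=0.75, gray=0.5):
    """Build the simultaneous-contrast image: a dark panel and a light
    panel side by side, each with an identical gray center square."""
    img = np.empty((size, 2 * size))
    img[:, :size] = dark              # left panel: dark background
    img[:, size:] = light             # right panel: light background
    lo = (size - square) // 2
    hi = lo + square
    img[lo:hi, lo:hi] = gray                    # left center square
    img[lo:hi, size + lo:size + hi] = gray      # right center square
    return img

img = contrast_stimulus()
# The two center squares are pixel-for-pixel identical, even though
# the one on the dark background appears lighter when viewed:
assert np.array_equal(img[70:130, 70:130], img[70:130, 270:330])
```

Displaying `img` with any grayscale image viewer reproduces the illusion: identical pixel values, different perceived brightness.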

To understand how the brain interprets all the visual signals the eyes receive, vision scientists are starting to combine neuroscience methods with strategies astronomers use to look through the atmosphere into space. Cone cells in the retina at the back of the eye are color detectors. In natural vision, people never experience the excitation of just one cone cell; usually at least three to five cones are excited at a time. Researchers are now using optical lenses to deliver precise visual information to the eye, capable of exciting just one cone at a time. Experiments like these allow precise control of the visual information coming into the brain and could help scientists uncover more of the rules and strategies the brain uses to infer what we see.


Visar Berisha
Assistant Professor
College of Health Solutions and School of Electrical, Computer and Energy Engineering
Arizona State University

 What we say is more than just words. Speech can open a window into the brain and serve as a status signal of neurological health.

Scientists know which parts of the brain, mouth and musculature are responsible for different aspects of speech like how fast we talk and how we pronounce words. Slurred words or talking in a monotone can provide information about what is happening in the brain. Speech has already been used as a signal to track the progression of changes in the brain from neurodegenerative diseases like Parkinson’s disease, Alzheimer’s disease or amyotrophic lateral sclerosis (ALS).

Because of the prevalence of smart phones, collecting speech is easy. Tracking patterns in speaking rate, tone, articulation and vocabulary can be used to supplement other ways of monitoring the progression of a neurodegenerative disease, like neurological examinations and magnetic resonance images. Engineers, computer scientists and neurologists are now working on how to use speech signals to predict clinical outcomes, like which people might have a disease and how a disease is progressing.
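One of the simplest signals mentioned above, speaking rate, can be tracked over repeated recordings. The sketch below is a hypothetical illustration (the `Session` structure, the sample numbers, and the trend threshold are all invented for this example, and real systems use far richer acoustic features than words per minute):

```python
from dataclasses import dataclass

@dataclass
class Session:
    """One recorded speech sample: day of recording, word count,
    and duration in seconds."""
    day: int
    words: int
    seconds: float

    @property
    def wpm(self) -> float:
        """Speaking rate in words per minute."""
        return 60.0 * self.words / self.seconds

def slowing_trend(sessions) -> float:
    """Least-squares slope of speaking rate (WPM) across sessions.
    A persistently negative slope could flag progressive slowing
    worth following up with a neurological examination."""
    n = len(sessions)
    xs = [s.day for s in sessions]
    ys = [s.wpm for s in sessions]
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Four monthly two-minute samples from one (hypothetical) speaker:
sessions = [Session(0, 300, 120.0), Session(30, 290, 120.0),
            Session(60, 270, 120.0), Session(90, 255, 120.0)]
print(f"rate change: {slowing_trend(sessions):.3f} WPM per day")
```

The point of the sketch is the workflow, not the feature: collect speech passively, reduce it to a number, and watch that number over time alongside clinical measures.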


Flip Phillips
Professor of Psychology and Neuroscience
Skidmore College 

Exaggeration in art, and especially animation, can make the subject of a painting or a cartoon seem more real. Artists have known about this for centuries, and animators have used exaggeration since the earliest cartoons were projected on a movie screen.

Before earning his doctorate in cognitive psychology, Phillips worked as an animation scientist at Pixar in the late 1980s. In this talk, he describes how he and his colleagues at Pixar would base the animation of how an object moved, such as a spring bouncing on a toy’s head, on the laws of physics. But then they had to modify the final animation by hand because it just didn’t look right.

Those modifications were exaggerations of movement that actually defy the laws of physics.

The path a bouncing ball really takes – the one defined by the laws of physics – looks unrealistic to us when we watch it as a cartoon. For an animated bouncing ball to seem real, it has to stretch out into an oval shape before hitting the ground. Animators have drawn bouncing balls this way for decades, and Phillips has conducted a series of psychology experiments showing that people judge this kind of animation as more plausible. But if the shape of a bouncing ball were to actually stretch out in this way before hitting the ground, the atmosphere would have to be so thick that the ball would probably catch fire.
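The exaggeration animators apply here is often called squash and stretch, and one common rule of thumb can be sketched in a few lines: elongate the ball along its direction of travel in proportion to speed, and narrow it across that direction so its apparent volume stays constant. The parameter values below are invented for illustration; in practice animators tune the effect by eye, exactly as Phillips describes:

```python
def squash_stretch(speed, k=0.05, limit=1.8):
    """Exaggerated deformation of a falling cartoon ball.

    Returns (along, across) scale factors: the ball elongates along
    its velocity as it speeds up, capped at `limit`, and narrows
    across it so the product of the two factors stays 1.0 -- a
    physically impossible but perceptually convincing stretch.
    """
    along = min(1.0 + k * speed, limit)   # stretch grows with speed
    across = 1.0 / along                  # conserve apparent volume
    return along, across

# A ball accelerating toward the ground stretches more and more:
for speed in (0.0, 5.0, 10.0):
    along, across = squash_stretch(speed)
    print(f"speed={speed:4.1f}  stretch={along:.2f}  squash={across:.2f}")
```

At zero speed the ball is a circle; at high speed it is the physically impossible oval that viewers nonetheless rate as the more believable bounce.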