What is NeuroAI?

NeuroAI is an emerging discipline that aims both to advance AI by studying the human brain and to use AI to better understand the brain. One of its core tools is the use of artificial neural networks to build computational models of specific brain functions. The approach took off in 2014, when researchers at MIT and Columbia University found that deep artificial neural networks could explain the responses of the brain's object-recognition region, the inferotemporal cortex (IT). This established a basic experimental recipe: compare an artificial neural network with the brain, then repeat and iterate across many brain processes (shape recognition, motion processing, speech processing, arm control, spatial memory, and so on), building a corresponding processing model for each one. The recipe is:

  1. Train an artificial neural network in silico to solve a task such as object recognition. The result is called a task-optimized neural network. Crucially, training usually requires only images, video and sound, not brain data.
  2. Compare the intermediate activations of the trained network with measured brain responses, using statistical tools such as linear regression or representational similarity analysis (see the code sketch after this list).
  3. Select the network that best matches the brain data as the current best model of that brain region.
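As a concrete illustration, here is a minimal sketch of the three steps in Python, assuming PyTorch, torchvision and scikit-learn; the stimulus set and the recorded "IT" responses are random placeholder arrays rather than real data, and the choice of network and layer is arbitrary:

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Step 1: a task-optimized network, trained on images only (no brain data).
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Capture intermediate activations from one layer with a forward hook.
acts = {}
net.layer4.register_forward_hook(
    lambda module, inputs, output: acts.update(layer=output.flatten(start_dim=1))
)

# Placeholder stimuli and recordings: 200 images, 50 recorded "IT neurons".
images = torch.rand(200, 3, 224, 224)      # stand-in stimulus set
neural_data = np.random.rand(200, 50)      # stand-in brain responses

with torch.no_grad():
    net(images)
features = acts["layer"].numpy()

# Step 2: linear regression from model activations to brain responses.
X_tr, X_te, y_tr, y_te = train_test_split(features, neural_data, test_size=0.25)
readout = RidgeCV(alphas=[0.1, 1.0, 10.0]).fit(X_tr, y_tr)

# Step 3: score the model on held-out stimuli; the candidate network with the
# highest score becomes the current best model of this brain region.
pred = readout.predict(X_te)
score = np.mean([np.corrcoef(pred[:, i], y_te[:, i])[0, 1] for i in range(50)])
print(f"mean held-out correlation: {score:.3f}")
```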

The brain data used in this method can come from single-neuron recordings or be collected non-invasively with techniques such as MEG or fMRI.

A NeuroAI model of part of the brain has two key properties. First, it is computable: give the model a stimulus, and it will compute how the corresponding brain region responds. Second, it is differentiable: it is a deep neural network, so it can be probed and optimized with the same gradient-based methods used for visual recognition and natural language processing models. In other words, neuroscientists can bring the full toolkit that powered the deep learning revolution to bear on their research, including tensor frameworks such as PyTorch and TensorFlow.
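A minimal sketch of what "computable" and "differentiable" mean in practice, assuming a torchvision network as a stand-in task-optimized backbone and a hypothetical linear readout (in a real model its weights would come from a regression fit like the one above):

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Stand-in feature extractor: a task-optimized network minus its classifier.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
backbone = nn.Sequential(*list(resnet.children())[:-1])   # outputs (N, 512, 1, 1)

class BrainRegionModel(nn.Module):
    """Hypothetical model: stimulus in, predicted brain-region responses out."""
    def __init__(self, backbone, n_neurons=50):
        super().__init__()
        self.backbone = backbone
        self.readout = nn.Linear(512, n_neurons)   # in practice, fit to brain data

    def forward(self, stimulus):
        features = self.backbone(stimulus).flatten(start_dim=1)
        return self.readout(features)

brain_model = BrainRegionModel(backbone)

# Computable: a forward pass predicts how the region responds to any stimulus.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
response = brain_model(image)

# Differentiable: autograd gives d(response)/d(pixels), so the model can be
# probed and optimized with the same tooling used for vision and NLP models.
response[0, 0].backward()
print(image.grad.shape)   # gradient of one predicted unit with respect to every pixel
```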

This amounts to a huge technological leap: from understanding very little of how most of the brain works, to having downloadable, runnable models of some of its regions.

Applications of NeuroAI

Art and Advertising

We take in 99% of media through our eyes and ears. The eyes and ears themselves do not interpret the experience; they are just sensors. It is the brain that processes and makes sense of the information. Faced with different content, our brains produce different thoughts and feelings based on what we see and hear, and the result is not necessarily what the creator intended to convey or what the audience accepts.

So if you want to know whether the message embedded in a piece of work lands with the audience as intended, you have to test it, over and over. At internet companies the standard solution is A/B testing. Google, for example, famously tested dozens of shades of blue for the hyperlinks on its search results page; the winning shade reportedly added about $200 million in revenue over the baseline, roughly 1% of Google's revenue at the time. Netflix likewise adjusts movie thumbnails per user to optimize the viewing experience.
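For readers unfamiliar with A/B testing, the arithmetic behind such an experiment is a simple two-proportion z-test; the click counts below are made up for illustration and are not Google's or Netflix's data:

```python
from scipy.stats import norm

# Made-up click counts for two link colors.
clicks_a, views_a = 5_400, 100_000    # variant A: baseline shade
clicks_b, views_b = 5_610, 100_000    # variant B: candidate shade

p_a, p_b = clicks_a / views_a, clicks_b / views_b
p_pool = (clicks_a + clicks_b) / (views_a + views_b)
se = (p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b)) ** 0.5
z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))         # two-sided two-proportion z-test
print(f"lift = {p_b - p_a:.4f}, z = {z:.2f}, p = {p_value:.4f}")
```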

But what if we could predict how people will react to a piece of media before collecting any test data, without running large-scale traffic experiments? Companies could then refine their copy and websites before they ever reach a mass audience. NeuroAI models are getting better and better at predicting people's responses to visual material. Researchers at Adobe, for example, are working on visual design tools that help designers predict and guide viewers' attention, such as editing a photo to make it more memorable or more aesthetically pleasing.

Artificial neural networks can even find more effective ways to convey information than real images. OpenAI's CLIP can help find images that match the feeling you want to convey, and text-to-image models from OpenAI and Google can generate realistic images from text prompts.

There is already a huge market for optimizing audiovisual media and websites, advertising in particular, and NeuroAI and algorithmic art are beginning to enter that process. Strong demand creates a virtuous cycle: as more resources flow into practical applications, NeuroAI will become better and more useful. As a by-product, fields beyond advertising will also benefit from better models of the brain.

Accessibility and Algorithm Design

One of the most exciting applications of NeuroAI is improving product accessibility.

Most media is designed for the "average person", but everyone processes audiovisual information differently. People with color blindness, for example, process visual information differently from the general population, so a great deal of media is not well suited to them. Many tools can simulate how content looks to a color-blind viewer, but they still require someone with normal color vision to interpret the result and make adjustments. Naive static color remapping is not enough either, because some material changes meaning when its colors are remapped (charts, for instance, can become unreadable). With NeuroAI, we could automatically generate materials and websites that color-blind users can read while preserving the original graphic semantics.
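As a rough illustration of the problem, the sketch below simulates how a small chart palette might appear under deuteranopia and flags color pairs that lose their contrast. The simulation matrix is only an approximation in the style of Machado et al. (2009), and the palette is hypothetical:

```python
import numpy as np

DEUTERANOPIA = np.array([             # approximate linear-RGB simulation matrix
    [0.367, 0.861, -0.228],
    [0.280, 0.673,  0.047],
    [-0.012, 0.043, 0.969],
])

palette = np.array([                   # hypothetical chart colors (linear RGB)
    [0.90, 0.10, 0.10],                # red series
    [0.10, 0.70, 0.10],                # green series
    [0.10, 0.10, 0.90],                # blue series
])

simulated = np.clip(palette @ DEUTERANOPIA.T, 0.0, 1.0)

# Flag pairs whose simulated colors are much closer than the originals,
# i.e. pairs whose semantic contrast would be lost for a deuteranope.
for i in range(len(palette)):
    for j in range(i + 1, len(palette)):
        before = np.linalg.norm(palette[i] - palette[j])
        after = np.linalg.norm(simulated[i] - simulated[j])
        if after < 0.5 * before:
            print(f"colors {i} and {j} lose most of their contrast "
                  f"({before:.2f} -> {after:.2f})")
```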

Another example is helping people with learning difficulties, such as dyslexia. One underlying factor in dyslexia is heightened sensitivity to visual crowding, which makes it hard to distinguish shapes that share similar basic features. MIT is developing a NeuroAI model of the visual system of people with dyslexia, which could help design fonts that are both attractive and easy to read. These are potential, and urgently needed, quality-of-life improvements.

Health

Many neuroscientists enter the field hoping their research will improve human health, especially for people with neurological disease or mental health problems. NeuroAI models open the door to new therapies: once you have a good model of a brain region, you can carefully design the right stimulus to deliver the corresponding information, like a key fitting a lock. In this sense the application resembles algorithmic drug design, except that what we deliver to the body is not a small molecule but images and sounds.
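Here is a minimal sketch of what key-and-lock stimulus design could look like, assuming we already have a differentiable brain-region model such as the hypothetical BrainRegionModel above; it simply follows the gradient of one predicted unit's response with respect to the pixels:

```python
import torch

def design_stimulus(brain_model, target_unit, steps=200, lr=0.05):
    """Gradient-ascend an image to maximize one unit's predicted response."""
    stimulus = torch.rand(1, 3, 224, 224, requires_grad=True)
    optimizer = torch.optim.Adam([stimulus], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        response = brain_model(stimulus)[0, target_unit]
        (-response).backward()            # maximize the predicted response
        optimizer.step()
        with torch.no_grad():
            stimulus.clamp_(0.0, 1.0)     # keep the image in a valid range
    return stimulus.detach()

# e.g. key_image = design_stimulus(brain_model, target_unit=7)
```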

Problems involving the eyes and ears are likely to be addressed first, because these receptors are already well modeled. For example, with a NeuroAI model of the auditory system, a cochlear implant's stimulation pattern could be optimized to amplify speech and improve hearing.

Many people experience changes to their sensory systems during their lives, myopia being a common example. After such changes, the brain adapts to the world and learns to make sense of the new sensory input, a phenomenon called perceptual learning. NeuroAI could amplify perceptual learning so that people recover perceptual skills quickly and effectively. Similarly, it could help people who have lost smooth control of their limbs after a stroke, or sharpen the sensory skills of healthy people, for example in training baseball players, archers or pathologists.

Finally, these techniques could also make a real difference in treating mood disorders through sensory experience. We already know that electrical stimulation of specific brain regions can relieve treatment-resistant depression. With NeuroAI, it may be possible to achieve similar effects by steering brain activity indirectly through the senses.

Augmented reality

One technology that will make NeuroAI applications far more powerful is AR glasses. Because they can blend seamlessly into daily life, AR has the potential to become a ubiquitous computing platform. Tech and internet giants are racing to build more capable AR glasses, so there is enormous momentum on the supply side. This will put in people's hands a display far more powerful than today's static screens.

If the trajectory of VR headsets is any guide, these devices will eventually integrate eye tracking. In other words, we will be able to deliver a far wider range of visual stimuli, in a far more controlled way, than is currently possible. These devices also have far-reaching potential in health applications.

Brain-computer interfaces (BCI)

With good displays (for images) and speakers (for sound), we can precisely control the brain's main input signals. The next, more powerful stage beyond delivering stimuli through the senses is to verify that the brain responds in the expected way, using a read-only brain-computer interface (BCI). That lets us assess the effect of a stimulus on the brain and, if it falls short of expectations, adjust it accordingly, in what is called closed-loop control.
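Schematically, closed-loop control might look like the sketch below, where present_stimulus, read_bci and adjust are placeholder stubs standing in for real display hardware, a real non-invasive recording device and a real update rule:

```python
import numpy as np

def present_stimulus(stimulus):
    """Stub: in a real system this would drive a display or headphones."""
    pass

def read_bci():
    """Stub: in a real system this would return EEG/fNIRS/MEG features."""
    return np.random.rand(4)

def adjust(stimulus, error):
    """Stub: nudge the stimulus parameters in proportion to the error."""
    return stimulus + 0.1 * error

def closed_loop(target, stimulus, n_trials=20, tolerance=0.05):
    for _ in range(n_trials):
        present_stimulus(stimulus)
        measured = read_bci()                 # did the brain respond as expected?
        error = target - measured
        if np.linalg.norm(error) < tolerance:
            break                             # close enough: keep this stimulus
        stimulus = adjust(stimulus, error)    # otherwise adjust and try again
    return stimulus
```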

For this kind of assessment we do not need to implant a chip or a deep brain stimulator: it is enough to measure brain activity non-invasively, from outside the skull. Nor do we need a BCI to stimulate the brain directly, since glasses and headphones already control most of its input. Many non-invasive, read-only BCIs are already commercialized or in development and could be used for closed-loop control. Some examples include:

  • Electroencephalography (EEG). EEG measures the brain's electrical activity from outside the skull. Its temporal resolution is very high, but because the skull acts as a volume conductor, its spatial resolution is low. EEG becomes far more powerful once we control the stimulus: for example, we can associate stimuli with the EEG signal and decode which stimulus captured attention (the evoked-potential method; see the sketch after this list).
  • Functional magnetic resonance imaging (fMRI). fMRI measures tiny changes in blood oxygenation linked to neural activity, and it is the only technology that can non-invasively read deep brain activity with good spatial precision. For closed-loop neural control there are two relatively mature paradigms: fMRI-based biofeedback, and cortical mapping. Both show that it is feasible to evaluate how NeuroAI-designed stimuli affect the brain.
  • Functional near-infrared spectroscopy (fNIRS). fNIRS uses diffuse light to measure cerebral blood volume between an emitter and a detector. Conventional near-infrared imaging has low spatial resolution, but this is improving with time gating (TD-NIRS) and massive oversampling (diffuse optical tomography). On the academic side, Joe Culver's group at WUSTL has decoded video from visual cortex. Commercially, Kernel is now building and selling TD-NIRS headsets, an impressive engineering feat, and the field is advancing rapidly.
  • Magnetoencephalography (MEG). MEG localizes brain activity by measuring tiny changes in magnetic fields. It is similar to EEG in that it measures electromagnetic fields, but it is not blurred by volume conduction, so its spatial resolution is better. Optically pumped magnetometers (OPMs) are also advancing steadily, and individual OPM sensors may soon be available on the open market.
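As a toy version of the evoked-potential idea mentioned in the EEG bullet above, the sketch below averages simulated epochs per stimulus to build response templates, then decodes which stimulus a new epoch came from by correlation; a real pipeline would of course use recorded, preprocessed EEG:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stimuli, n_trials, n_channels, n_samples = 3, 40, 8, 128

# Simulated data: each stimulus evokes its own pattern, buried in noise.
patterns = rng.normal(size=(n_stimuli, n_channels, n_samples))
epochs = patterns[:, None] + 2.0 * rng.normal(
    size=(n_stimuli, n_trials, n_channels, n_samples)
)

templates = epochs.mean(axis=1)        # average evoked response per stimulus

def decode(epoch, templates):
    """Return the index of the template most correlated with the epoch."""
    scores = [np.corrcoef(epoch.ravel(), t.ravel())[0, 1] for t in templates]
    return int(np.argmax(scores))

# Decode a fresh simulated trial for stimulus 1.
test_epoch = patterns[1] + 2.0 * rng.normal(size=(n_channels, n_samples))
print("decoded stimulus:", decode(test_epoch, templates))
```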

Beyond these well-known techniques, dark-horse technologies such as digital holography, photoacoustic tomography and functional ultrasound could dramatically accelerate a paradigm shift in this field.

Consumer-grade non-invasive BCI is still in its infancy, but strong demand around AR use cases will keep expanding the market. We are likely to see rapid progress in low-dimensional BCI, and the NeuroAI applications described above could well become reality.