How do we represent information that has no sensory features? How are abstract concepts like "freedom", devoid of external perceptible referents, represented in the brain? To address the role of sensory information in the neural representation of concepts, we investigated how people born blind process concepts whose referents are imperceptible to them because of their visual nature (e.g., "rainbow" or "red"). We find that the left dorsal anterior temporal lobe (ATL) shows a preference both for typical abstract concepts ("freedom") and for concepts whose referents are not sensorially available to the blind ("rainbow"), as compared with concepts whose referents are partially perceptible (e.g., "rain"). Activation pattern similarity in the dorsal ATL is related to the sensorial-accessibility ratings of the concepts in the blind. Parts of the inferior-lateral ATL and the temporal pole responded preferentially to abstract concepts devoid of any external referents ("freedom") relative to imperceptible objects, in effect distinguishing between object and non-object concepts. The medial ATL showed a preference for concrete concepts ("cup"), along with a preference for items partly perceptible to the blind ("rain", as compared with "rainbow"), indicating this region's role in representing concepts with sensory referents beyond vision. These findings point to a new division of labor among the medial, dorsal and lateral aspects of the ATL in representing different properties of object and non-object concepts.
What forces direct brain organization and its plasticity? When brain regions are deprived of their input, which regions reorganize based on compensation for the disability and experience, and which regions show topographically constrained plasticity? People born without hands activate their primary sensorimotor hand region while moving body parts used to compensate for this disability (e.g., their feet). This was taken to suggest a neural organization based on functions, such as performing manual-like dexterous actions, rather than on body parts, in primary sensorimotor cortex. We tested the selectivity for the compensatory body parts in the primary and association sensorimotor cortex of people born without hands (dysplasic individuals). Despite clear compensatory foot use, the primary sensorimotor hand area in the dysplasic subjects showed preference for adjacent body parts that are not compensatorily used as effectors. This suggests that function-based organization, proposed for congenital blindness and deafness, does not apply to the primary sensorimotor cortex deprivation in dysplasia. These findings stress the roles of neuroanatomical constraints like topographical proximity and connectivity in determining the functional development of primary cortex even in extreme, congenital deprivation. In contrast, increased and selective foot movement preference was found in dysplasics’ association cortex in the inferior parietal lobule. This suggests that the typical motor selectivity of this region for manual actions may correspond to high-level action representations that are effector-invariant. These findings reveal limitations to compensatory plasticity and experience in modifying brain organization of early topographical cortex compared with association cortices driven by function-based organization.
Cortex plasticity after hand amputation is considered harmful, causing phantom limb pain. A new study shows that cortical overtake can occur instead in a compensatory manner in people born with one hand, for multiple body parts used to overcome disability.
The visual occipito-temporal cortex is composed of several distinct regions specialized in the identification of different object kinds such as tools and bodies. Its organization appears to reflect not only the visual characteristics of the inputs but also the behavior that can be achieved with them. For example, there are spatially overlapping responses for viewing hands and tools, which is likely due to their common role in object-directed actions. How dependent is occipito-temporal cortex organization on object manipulation and motor experience? To investigate this question, we studied five individuals born without hands (individuals with upper limb dysplasia), who use tools with their feet. Using fMRI, we found the typical selective hand–tool overlap (HTO) not only in typically developed control participants but also in four of the five dysplasics. Functional connectivity of the HTO in the dysplasics also showed a largely similar pattern as in the controls. The preservation of functional organization in the dysplasics suggests that occipito-temporal cortex specialization is driven largely by inherited connectivity constraints that do not require sensorimotor experience. These findings complement discoveries of intact functional organization of the occipito-temporal cortex in people born blind, supporting an organization largely independent of any one specific sensory or motor experience.
Congenital deafness causes large changes in auditory cortex structure and function, such that without an early-childhood cochlear implant, profoundly deaf children do not develop intact, high-level auditory functions. But how is auditory cortex organization affected by congenital, prelingual, and long-standing deafness? Does the large-scale topographical organization of the auditory cortex develop in people deaf from birth? And is it retained despite cross-modal plasticity? Using fMRI, we identified a topographic, tonotopy-based functional connectivity (FC) structure in humans in the core auditory cortex, in its extending tonotopic gradients in the belt, and even beyond. These regions show a similar FC structure in the congenitally deaf throughout the auditory cortex, including in the language areas. The topographic FC pattern can be identified reliably in the vast majority of the deaf at the single-subject level, despite the absence of hearing-aid use and poor oral language skills. These findings suggest that large-scale tonotopy-based FC does not require sensory experience to develop, and is retained despite life-long auditory deprivation and cross-modal plasticity. Furthermore, as the topographic FC is retained to varying degrees among the deaf subjects, it may serve to predict the potential for auditory rehabilitation using cochlear implants in individual subjects.
Evidence of task-specific sensory-independent (TSSI) plasticity from blind and deaf populations has led to a better understanding of brain organization. However, the principles determining the origins of this plasticity remain unclear. We review recent data suggesting that a combination of the connectivity bias and sensitivity to task-distinctive features might account for TSSI plasticity in the sensory cortices as a whole, from the higher-order occipital/temporal cortices to the primary sensory cortices. We discuss current theories and evidence, open questions and related predictions. Finally, given the rapid progress in visual and auditory restoration techniques, we address the crucial need to develop effective rehabilitation approaches for sensory recovery.
Is visual input during critical periods of development crucial for the emergence of the fundamental topographical mapping of the visual cortex? And would this structure be retained throughout life-long blindness, or would it fade as a result of plastic, use-based reorganization? We used functional connectivity magnetic resonance imaging based on intrinsic blood oxygen level-dependent fluctuations to investigate whether significant traces of topographical mapping of the visual scene, in the form of retinotopic organization, could be found in congenitally blind adults. A group of 11 fully and congenitally blind subjects and 18 sighted controls were studied. The blind demonstrated an intact functional connectivity structure along the three main retinotopic mapping axes: eccentricity (centre-periphery), laterality (left-right), and elevation (upper-lower), throughout the retinotopic cortex and extending to the high-level ventral and dorsal streams, including characteristic eccentricity biases in face- and house-selective areas. Functional connectivity-based topographic organization in the visual cortex of the blind was indistinguishable from the normally sighted retinotopic functional connectivity structure, as indicated by clustering analysis, and was found even in participants who did not have typical retinal development in utero (microphthalmics). While the internal structural organization of the visual cortex was strikingly similar, the blind exhibited profound differences in functional connectivity to other (non-visual) brain regions as compared with the sighted, which were specific to portions of V1: central V1 was more connected to language areas, but peripheral V1 to spatial attention and control networks.
These findings suggest that current accounts of critical periods and experience-dependent development should be revisited even for primary sensory areas, in that the connectivity basis for visual cortex large-scale topographical organization can develop without any visual experience and be retained through life-long experience-dependent plasticity. Furthermore, retinotopic divisions of labour, such as that between the visual cortex regions normally representing the fovea and periphery, also form the basis for topographically-unique plastic changes in the blind.
We live in a society based on vision. Visual information is used for orienting in our environment, identifying objects in our surroundings, alerting us to important events which require our attention, engaging in social interactions, and many other functions necessary for efficient everyday life. Thus, the loss of vision decreases quality of life and poses a severe challenge to efficient functioning for millions of individuals worldwide.
Despite some medical progress, the restoration of visual information to the blind still faces multiple technical and scientific difficulties. “Bionic eyes” or visual prostheses are being developed mostly for specific blindness etiologies and target only a subpopulation of the visually impaired. Even these devices have yet to reach the stage where the technology can provide high-resolution, detailed visual information.
More importantly, these approaches take for granted the ability of the human brain, following long-term or even life-long blindness, to interpret vision once the input from the eyes becomes available.
The current scientific consensus regarding the development of the visual cortex is that visual deprivation during critical or sensitive periods in early development may result in functional blindness, as the brain is not organized to process visual information properly, and this may be irreversible later in life. In fact, the rare reported cases of late-onset surgical sight restoration (by means of cataract removal in early-blind patients) show severe visual impairments that persist even following long-term exposure to vision. Furthermore, studies of early-onset and congenitally blind people have shown that their visual cortex may have plastically reorganized to process information from other sensory modalities. Recent studies showed that even short-term visual deprivation in adulthood may cause some functional changes in the visual system. Thus, sight restoration may indeed be severely limited by the reorganization of the visual cortex.
In this dissertation I test this theory by using an alternative approach to visual rehabilitation, in which the visual information is conveyed non-invasively using the remaining senses of the blind. We used a sensory substitution device (SSD) that translates visual information into sounds using a consistent algorithm (The vOICe). Because SSD soundscape translations of natural visual input are very complex, we developed a structured training protocol (Striem-Amit et al., 2012b) in which congenitally blind people are gradually taught how to interpret the sounds carrying the visual information. This training paradigm also made it possible to test whether the congenitally blind can learn to perceive complex visual information without having had visual experience during early infancy, and to better identify the neural correlates of processing such information in the blind.
Specifically, this dissertation aimed to study:
1. Whether and how SSDs may be applied for visual rehabilitation to reach sufficient practical visual acuity and functional abilities. Can we restore complex visual capacities such as object categorization (a visual ability which requires feature binding within visual objects as well as their segregation from their background) beyond the critical developmental period in the congenitally blind?
2. How are such visual-in-nature artificially-constructed stimuli processed in the blind brain? Can we find evidence for the functional specializations of the normal visual cortex in the absence of visual experience during the critical periods of early development?
We found that the blind were able to learn to perceive high-acuity visual information, and could even exceed the Snellen acuity test threshold of the World Health Organization for blindness (Striem-Amit et al., 2012d) and the 'visual' acuity possible using any other current means of visual rehabilitation. Furthermore, they were able to perceive and categorize images of visual categories, and carry out certain visual tasks (Striem-Amit et al., 2012b).
A neuroimaging investigation of the processing of SSD information showed that despite their lack of visual experience during development, the visual cortex of the congenitally blind was activated during the processing of soundscapes (images represented by sounds). More importantly, its activation pattern mimicked the task- and category-selectivities of the normally developed visual cortex. Specifically, we found that the blind showed a double dissociation between processing image shape and location in the ventral and dorsal processing streams (Striem-Amit et al., 2012c), which constitutes the large-scale organization principle of the visual cortex. Furthermore, we found that within the ventral stream, category-selectivity for one visual category over all others tested can be seen in the visual word-form area (VWFA), which, as in the normally sighted, showed a robust preference for letters over textures and other visual categories (Striem-Amit et al., 2012b).
In both studies, the visual cortex showed retention of functional selectivity despite the atypical auditory sensory-modality input, the lack of visual experience, the limited training duration (dozens of hours), and the fact that such training was applied only in adulthood. These findings support a controversial organization theory which suggests that instead of being divided according to the sensory modalities which elicit it, a cortical area may be better defined by the tasks or computations it conducts, whereas the input sense organ is irrelevant. Specifically, this model suggests that a combination of top-down connectivity with an innate preference for computation type may generate the same task-selectivities even in the absence of bottom-up visual input. This theory has interesting implications for the ability to restore sight later in life, as it suggests that the blind brain may not have lost its ability to process some aspects of visual information, and may learn to do so if this information is delivered to the brain either via SSDs, as we show possible here, or using other, more invasive means.
Vision is by far the most prevalent sense for experiencing others' body shapes, postures, actions, and intentions, and its congenital absence may dramatically hamper body-shape representation in the brain. We investigated whether the absence of visual experience and limited exposure to others' body shapes could still lead to body-shape selectivity. We taught congenitally fully-blind adults to perceive full-body shapes conveyed through a sensory-substitution algorithm topographically translating images into soundscapes. Despite the limited experience of the congenitally blind with external body shapes (via touch of close-by bodies and for ~10 hr via soundscapes), once the blind could retrieve body shapes via soundscapes, they robustly activated the visual cortex, specifically the extrastriate body area (EBA). Furthermore, body selectivity versus textures, objects, and faces in both the blind and sighted control groups was not found in the temporal (auditory) or parietal (somatosensory) cortex but only in the visual EBA. Finally, resting-state data showed that the blind EBA is functionally connected to the temporal-parietal junction/superior temporal sulcus Theory-of-Mind areas in the temporal cortex. Thus, the EBA preference is present without visual experience and with little exposure to external body-shape information, supporting the view that the brain has a sensory-independent, task-selective supramodal organization rather than a sensory-specific organization.
A key question in sensory perception is the role of experience in shaping the functional architecture of the sensory neural systems. Here we studied dependence on visual experience in shaping the most fundamental division of labor in vision, namely between the ventral "what" and the dorsal "where and how" processing streams. We scanned 11 fully congenitally blind (CB) and 9 sighted individuals performing location versus form identification tasks following brief training on a sensory substitution device used for artificial vision. We show that the dorsal/ventral visual pathway division of labor can be revealed in the adult CB when perceiving sounds that convey the relevant visual information. This suggests that the most important large-scale organization of the visual system into the 2 streams can develop even without any visual experience and can be attributed at least partially to innately determined constraints and later to cross-modal plasticity. These results support the view that the brain is organized into task-specific but sensory modality-independent operators.
Using a visual-to-auditory sensory-substitution algorithm, congenitally fully blind adults were taught to read and recognize complex images using "soundscapes" (sounds topographically representing images). fMRI was used to examine key questions regarding the visual word form area (VWFA): its selectivity for letters over other visual categories without visual experience, its feature tolerance for reading in a novel sensory modality, and its plasticity for scripts learned in adulthood. The blind activated the VWFA specifically and selectively during the processing of letter soundscapes, relative to both textures and visually complex object categories, and relative to mental-imagery and semantic-content controls. Further, VWFA recruitment for reading soundscapes emerged after 2 hr of training in a blind adult on a novel script. Therefore, the VWFA shows category selectivity regardless of input sensory modality, visual experience, and long-term familiarity or expertise with the script. The VWFA may perform a flexible task-specific rather than sensory-specific computation, possibly linking letter shapes to phonology.
Sensory substitution devices (SSDs) convey visual information through sounds or touch, thus theoretically enabling a form of visual rehabilitation in the blind. However, for clinical use, these devices must provide fine-detailed visual information, which has not yet been demonstrated for this or any other means of visual restoration. To test the functional acuity conveyed by such devices, we used the Snellen acuity test conveyed through a high-resolution visual-to-auditory SSD (The vOICe). We show that congenitally fully blind adults can exceed the World Health Organization (WHO) blindness acuity threshold using SSDs, reaching the highest acuity reported yet with any visual rehabilitation approach. This demonstrates the potential of SSDs as inexpensive, non-invasive visual rehabilitation aids, alone or as a supplement to visual prostheses.
In sensory substitution devices (SSDs), visual information captured by an artificial receptor is delivered to the brain using non-visual sensory information. Using a visual-to-auditory SSD called "The vOICe", we previously reported that blind individuals perform successfully on object recognition tasks and are able to recruit specific ventral 'visual' structures for shape recognition using the device (i.e., through soundscapes). Comparable recruitment was also observed in sighted individuals learning to use this device. Here we directly compare a group of seven subjects who learned to perform object recognition via soundscapes and a group of seven subjects who learned arbitrary associations between sounds and object identity. We contrast these two groups' brain activity for object recognition using the SSD, and for auditory object and scrambled-object soundscapes. We show that the structures most critical and specific for shape extraction for the purpose of object recognition are the left precentral sulcus (PCS) and the bilateral lateral-occipital complex (LOC). We also found significant activation in the occipito-parietal and posterior occipital cortex not previously observed using a smaller sample of subjects. These results support the notion that interactions between visual structures and a network of additional areas, specifically in the prefrontal cortex (PCS), might underlie the machinery most critical for achieving multisensory or metamodal shape recognition.
In order to capture the "magic" of these rehabilitation approaches and illustrate how surprisingly efficient they might be if proper training is applied, we will begin this chapter by presenting some of these exciting new solutions and briefly discuss the rehabilitation outcomes currently associated with them. To better understand the mechanisms mediating such outcomes and appreciate the remaining challenges that need to be overcome, in the second part of the chapter we provide a more theoretical illustration of the neuroplastic changes associated with the use of these devices. In particular, we show that these changes are neither "magic" nor in any way restricted to the use of the presented rehabilitation techniques. On the contrary, these techniques are designed to exploit and channel the brain's natural potential for change. This potential is present in all individuals, but may become somewhat more accentuated in the brains of the sensory-impaired, as the lack of one sensory modality leaves vast cortical regions free of their typical input and triggers a reorganization of such cortices and their integration into other brain networks. This reorganization is constrained and channeled by the individual's own activity, information available from the environment, as well as intrinsic properties of the neural system promoting or limiting such changes during different periods in life. Importantly, such restructuring is crucial for enabling the cognitive changes that also occur after sensory loss, allowing sensory-impaired individuals to function efficiently in their environment. Specifically, successfully dealing with sensory impairment often results in collateral benefits, which include better differentiation and higher efficiency of nonvisual sensory or other cognitive functions.
Many of the neural and cognitive changes triggered by sensory loss will be reviewed in the second part of the chapter, illustrating how they rely on the same mechanisms as those underlying the successful outcomes of novel rehabilitation techniques, which will now be presented.
The primary sensory cortices are characterized by a topographical mapping of basic sensory features, which is considered to deteriorate in higher-order areas in favor of complex sensory features. Recently, however, retinotopic maps were also discovered in the higher-order visual, parietal and prefrontal cortices. The discovery of these maps enabled distinctions between visual regions and clarified their function and hierarchical processing. Could such extension of topographical mapping to high-order processing regions apply to the auditory modality as well? This question has been studied previously in animal models but only sporadically in humans, whose anatomical and functional organization may differ from that of animals (e.g., unique verbal functions and Heschl's gyrus curvature). Here we applied fMRI spectral analysis to investigate the cochleotopic organization of the human cerebral cortex. We found multiple novel mirror-symmetric cochleotopic maps covering most of the core and high-order human auditory cortex, including regions considered non-cochleotopic, stretching all the way to the superior temporal sulcus. These maps suggest that topographical mapping persists well beyond the auditory core and belt, and that the mirror-symmetry of topographical preferences may be a fundamental principle across sensory modalities.
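As a methodological aside, the phase-encoded spectral analysis commonly used for such topographic mapping can be illustrated with a minimal sketch (all variable names and numbers below are hypothetical, not taken from the study): each voxel's time course is Fourier-transformed, and the phase at the stimulus-sweep frequency estimates the voxel's preferred position along the mapped dimension (here, tone frequency).

```python
import numpy as np

def preferred_phase(timecourse, n_cycles):
    """Phase (radians, in [0, 2*pi)) at the stimulus-sweep frequency.

    The stimulus sweeps the full tone range `n_cycles` times during the run,
    so the response of interest sits at Fourier bin index `n_cycles`.
    """
    spectrum = np.fft.rfft(timecourse - timecourse.mean())
    return np.angle(spectrum[n_cycles]) % (2 * np.pi)

# Synthetic voxel: 200 volumes, 10 sweep cycles, responding at a known
# point in each sweep (true_phase), plus Gaussian noise.
n_vols, n_cycles = 200, 10
rng = np.random.default_rng(0)
t = np.arange(n_vols)
true_phase = np.pi / 2
voxel = (np.cos(2 * np.pi * n_cycles * t / n_vols + true_phase)
         + 0.1 * rng.standard_normal(n_vols))

est = preferred_phase(voxel, n_cycles)  # recovers approximately true_phase
```

In an actual mapping analysis this phase would be computed per voxel and projected onto the cortical surface, with mirror-symmetric reversals of the phase gradient marking boundaries between adjacent maps.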
Neuroplasticity, or the brain’s ability to modify its structure and function at all levels, is variable over the course of life. Although it is most pronounced during early development, there is a growing consensus that a remarkable degree of flexibility is retained even during adulthood. In this chapter we explore the topic of brain plasticity, with special emphasis on large-scale plasticity following sensory loss and the potential for later rehabilitation. We concentrate on vision and blindness because visual functions, which are so important to humans, are subserved by vast parts of the cerebral cortex which become substantially reorganized to compensate for the lack of vision. This compensation manifests itself in different types of neuroplastic changes reflected in the altered cognitive functions and abilities observed in the blind. Understanding and controlling the mechanisms underlying these changes can have major clinical implications, as these may strongly influence the outcomes and success rates of visual rehabilitation. Currently the best hopes for regaining functional vision are provided by rehabilitation methods employing sensory substitution devices (SSDs) which supply visual information to the blind through other (auditory or tactile) modalities, and more invasive sensory restoration techniques which attempt to convey visual information directly to the visual pathways. These techniques can be exploited fully only through a solid understanding of the effects, maximal potential, and limits of brain plasticity. By attempting to send visual information to a “visual” cortex that has already been reorganized following the onset of blindness and teaching this area how to “see,” these methods rely on our ability to understand, channel, and control the mechanisms which enabled the brain to make its original adaptation to lost sensory input.
People tend to close their eyes when trying to retrieve an event or a visual image from memory. However, the brain mechanisms behind this phenomenon remain poorly understood. Recently, we showed that during visual mental imagery, auditory areas show a much more robust deactivation than during visual perception. Here we ask whether this is a special case of a more general phenomenon involving retrieval of intrinsic, internally stored information, which would result in crossmodal deactivations in other sensory cortices that are irrelevant to the task at hand. To test this hypothesis, a group of 9 sighted individuals was scanned while performing a memory retrieval task for highly abstract words (i.e., with low imaginability scores). We also scanned a group of 10 congenitally blind individuals, who by definition do not have any visual imagery per se. In sighted subjects, both auditory and visual areas were robustly deactivated during memory retrieval, whereas in the blind the auditory cortex was deactivated while visual areas, shown previously to be relevant for this task, presented a positive BOLD signal. These results suggest that deactivation may be most prominent in task-irrelevant sensory cortices whenever there is a need for retrieval or manipulation of internally stored representations. Thus, there is a task-dependent balance of activation and deactivation that might allow maximization of resources and filtering out of irrelevant information, enabling allocation of attention to the required task. Furthermore, these results suggest that the balance between positive and negative BOLD might be crucial to our understanding of a large variety of intrinsic and extrinsic tasks, including high-level cognitive functions, sensory processing and multisensory integration.
In the absence of vision, perception of space is likely to be highly dependent on memory. As previously stated, the blind tend to code spatial information in the form of "route-like" sequential representations [1-3]. Thus, serial memory, indicating the order in which items are encountered, may be especially important for the blind in generating a mental picture of the world. Accordingly, we find that the congenitally blind are remarkably superior to sighted peers in serial memory tasks. Specifically, subjects heard a list of 20 words and were instructed to recall the words according to their original order in the list. The blind recalled more words than the sighted (indicating better item memory), but their greatest advantage was in recalling longer word sequences (according to their original order). We further show that the serial memory superiority of the blind is not merely a result of their advantage in item recall per se (as we additionally confirm via a separate recognition memory task). These results suggest the refinement of a specific cognitive ability to compensate for blindness in humans.