EFTA02576801.pdf

DataSet-11 2 pages 477 words document
From: Joscha Bach
Sent: Wednesday, October 23, 2013 2:41 PM
To: Jeffrey Epstein
Cc: Joi Ito; Kevin Slavin; An Gesher; takashi ikegami; Martin Nowak; Greg Borenstein
Subject: Re: MDF

On 22.10.2013 at 16:01, Jeffrey Epstein <[email protected]> wrote:

> I would add the possibility that each differentiated input has its own encrypted algorithm, and looking at it from too high an altitude provides little info about each one, i.e. optic nerve encryption different than nasal receptors. Maybe even a one-time code that allows only the individual to access certain stored info.

Indeed! Each individual will form its own code, for each modality. On the other hand, these codes do not simply diverge; they are the result of the individual's adaptation to its own (changing, developing, deteriorating) physiology. The nervous system is designed to extract structure based on the statistical properties of the input, and to compensate for defects. For instance, replacing the fine-grained input provided by the many receptors of the cochlea with a crude implant (today's models sample only a handful of frequencies) will usually result in a subjective experience of continuous auditory perception; splicing the data of a few pixels into the optic nerve of a blind person may allocate those pixels their correct positions within the visual field.

An interesting question: what are the limits of the plasticity of the sensory modalities? For instance, could we switch modalities to some extent? More than a hundred years ago, Stratton did a famous experiment in which he wore glasses that turned the world upside down (using prisms). After a few days, his brain adapted and he would perceive everything as being upright again.

An experiment that I would like to see one day (and of which I am not aware whether someone has already tried it): equip a subject with an augmented reality display, for instance Google Glass, and continuously feed a visual depiction of auditory input into a corner of the display.
The input should transform the result of a filtered Fourier analysis of the sounds around the subject into regular colors and patterns that can easily be discerned visually. At the same time, plug the ears of the subject (for instance, with noise-canceling earplugs and white noise). With a little training, subjects should be able to read typical patterns (for instance, many phonemes) consciously from their sound overlay. But after a few weeks: could a portion of the visual cortex adapt to the statistical properties of the sound overlay so completely that the subject could literally perceive sounds via their eyes? Could we see music? Could we make use of induced synesthesia to partially replace a lost modality?

Cheers,
Joscha
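The audio-to-visual overlay proposed in the email can be sketched in code. The sketch below is not from the thread: it takes one frame of sound, performs a coarse, band-pooled Fourier analysis, and maps each band's energy to a color cell that could be tiled into a corner of a display. Frame size, band count, and the color scheme are all illustrative assumptions.

```python
# A minimal sketch of the proposed sound-to-sight overlay: a short audio
# frame is reduced to a few frequency-band energies ("filtered Fourier
# analysis"), and each band becomes one colored cell. All parameters here
# (sample rate, frame size, band count, HSV-style mapping) are assumptions
# chosen for illustration, not values from the email.
import numpy as np

SAMPLE_RATE = 16_000   # assumed sampling rate (Hz)
FRAME_SIZE = 1024      # samples per analysis frame
N_BANDS = 16           # coarse frequency bands, one grid cell each

def frame_to_colors(frame: np.ndarray) -> np.ndarray:
    """Map one audio frame to an (N_BANDS, 3) array of HSV-style triples."""
    windowed = frame * np.hanning(len(frame))     # taper to reduce leakage
    spectrum = np.abs(np.fft.rfft(windowed))      # magnitude spectrum
    # "Filtered" analysis: pool the fine spectrum into a few coarse bands.
    bands = np.array_split(spectrum, N_BANDS)
    energy = np.array([b.mean() for b in bands])
    energy = energy / (energy.max() + 1e-9)       # normalize to [0, 1]
    # Simple visual code: fixed hue per band, brightness follows energy.
    hues = np.linspace(0.0, 1.0, N_BANDS, endpoint=False)
    colors = np.stack([hues, np.ones(N_BANDS), energy], axis=1)
    return colors

# Usage: a pure 1 kHz tone should light up only the band containing 1 kHz.
t = np.arange(FRAME_SIZE) / SAMPLE_RATE
tone = np.sin(2 * np.pi * 1000 * t)
colors = frame_to_colors(tone)
print(colors.shape)                 # (16, 3)
print(int(colors[:, 2].argmax()))   # → 1 (the band containing 1 kHz)
```

A real implementation would run this per frame over a live microphone stream and render the cells as an overlay; the point of the sketch is only that the mapping from sound to a stable, learnable visual pattern is a few lines of signal processing.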
Document Details
SHA-256
6a9ce5fcda4d6f9474c3c705dbe7d39d83b24453c5a3f369bcef7dd7810fdf68
Bates Number
EFTA02576801
Dataset
DataSet-11
Type
document
Pages
2
