Typographics

[Image: Metaphorlize.jpg]

Typographics is defined in Visual Experiments Lain as a compound word formed from type and graphic. It describes the anime's montage scenes in which words float or scroll across the screen, usually accompanying narration or crosstalk, or during scenes in which Lain dives into the Wired. These scenes were created by Ueda Yasuyuki using Adobe After Effects and other programs.

For the purposes of this article, typographics means any situation in which text is used primarily for stylistic effect. Below, the text from a few of these scenes has been transcribed and sourced for the reader's interest. Reading it probably won't contribute to your understanding of Lain: the text is merely decorative, with little to no relevance to the events of the scenes in which it appears.

Accela

In Layer 02, there is a scene where a narrator explains the workings of accela while digital graphics are displayed. During this scene, small, nearly transparent blocks of text scroll up the screen. The text comes from two distinct sources and reads as follows, with line breaks and typographical errors for the most part preserved:

Anti-VEGF
Humanized Monoclonal Antibody
The anti-VEGF antibody is an inhibitor of angiogenesis (blood-vessel growth) that may hinder the growth of cancer tumors by starving their blood supply. Genentech is investigating this antibody in Phase II


Vascular endothelial growth factor (VEGF) is a natural protein that promotes angiogenesis (blood vessel growth). VEGF couldpotentially benefit patients who have a heart that is functioning but has a blocked blood supply due to artherioscleroticcoronar

The above excerpt is from Genentech's website circa 1998.[1]


Smart materials would be made of nanomachines, 
typically microscopic -- with features any size,
down to atomic dimensions. Such machines would
have more or less, the same components as macro,
or familiar "normal" sized machines with
recognizable gears, 
bearings, motors, levers and belts... (except for all the nanocomputers).
This is somewhat helpful to the engineer designing smart materials with a myriad of functions like shape changing and distributing fluids and gas -- say for environmental control in a paper thin space suit that actively moves with the body or Drexler's smart paint. Open a can and splat some on a wall. The paint spreads itself across the surface using microscopic machines and changes color on command or becomes a wall sized 3-D television... Then again, the whole wall may as well be smart material changing texture or windows on command.
The point here... one can visualize the machines needed to do such a job: little tractors with sticky wheels, connection struts and cables to other machines. Actually, most of this can be done today, only on a much larger scale and at great expense (this is where the novel economics of self replicating machines plugs in). The transition for an engineer, is using more machines with much smaller parts and the luxury of vast computing power. These differences yield more great utility.
Gears made of Buckytubes are great nanomachine components... Buckytubes are carbon graphite sheets rolled into a tube (looks like tubes of chicken wire), and are "like" carbon in its diamond form, but with ALL available bonding strength aligned on one axis. These tubes are stronger than diamond fiber, and the strongest fiber possible with matter, so we're starting out with real racehorse material. Globus and Team designs are chemically stable, very tough and varied in geometry, including gears mad from "nested" Buckytubes or tubs inside of tubes. Such a gear would be stiffer and suited for a "long" drive shaft. And talk about performance...

It is difficult to determine where this text was originally published, but it appears to come from "Nanotechnology: Magic of Century 21st," a kind of introduction to nanotechnology.[2]

This scene also features a background image that originated from a site called Urban Diary.[3]


KIDS

During the scene in Layer 06 where Professor Hodgson explains KIDS to Lain, the following scrolls by in very small and difficult-to-read print:

Our initial body of utterances was collected 
with a program that periodically called staff
members and asked them to say 5
names selected at random from 64 full
Japanese names (surname followed by first name).
Using this program, 684 utterances
were recorded from 47 native Japanese speakers
(3/4 of which were male) and tagged with the
utterance transcription.
The utterances were represented as Bark-scale 
power spectra of 20 ms speech frames, Hamming
windowed at 5 ms shifts. The
utterances were time synchronously phoneme
labeled using their transcriptions in an 
automated process. The results were
manually checked and adjusted to correct
any missegmentations.
From this data we generated our initial models as described above and used them to bring the automated attendant system online. The system, open to about 100 users, ran as described in section 3, and after some months we had collected over 350 additional utterances. The newly collected utterances were briefly checked and a few mislabeled ones were deleted.
Even with the new utterances, this is not a large data set (especially considering that the task is multi-speaker, and recorded over telephone lines), but we nonetheless performed the following experiments to assess the effects of incremental retraining. The 350 new utterances were added in 4 stages (preserving their temporal sequence) to the initial set of 684 (e.g. 684+87, 684+175, ...). At each stage one third of all the utterances were selected at random and held out for testing. The remaining two thirds became the training data, from which a new set of models was made using the three step procedure outlined above.
At each stage we made 2 tests. The first checked basic recognition accuracy when new models were generated from the expanded training data and the new testing data was incorporated into the test set. The second used the new testing data but no new training data in order to check how well the original models generalized to unseen data. These two tests were conducted on both the models produced by embedded k-means clustering (step 2 above) and on the models after minimum error training (step 3 above). Results for these tests are shown in Figure 2.

The above is an excerpt from a journal article on the development of a computerized voice-recognition telephone operator.[4]

Infornography

The following can be seen a few minutes into Layer 11.

he principles and the organization o
where. This effort is truly interdiscip
ogy, chemistry and physics to comp
large part of Artificial Life is devote
s we know it - that is life on earth -
earch for principles of living system
rticlular subtrate. Thus, Artificial Lif
exploring artificial alternatives to a

described as attempting to underst vel rules; for example, how the simp lead to high-level structure, or the etween ants and their environment ior. Understanding this relatinoship provide novel solutions to complex ention, stock-market prediction, and

iving systems out of non-living part l the areas of Artificial Life. At prese two largely independent endeavors: al building blocks of nature (carbon sing the same principles but a differ computer. The former explores the ng to construct self-replicating mole populations of self-replicating enti eristics of different chemistries in su us, both the biochemical and the co hed light on the compelling questio

Then the scene changes and the scrolling text begins to repeat itself.

Life" is used to describe research int
some of the essential properties of
such systems that meet this criterio
nical--and these can be used to per
he principles and the organization o
where. This  effort is truly interdiscip
ogy, chemistry and physics to comp
large part of Artificial Life is devote
s we know it - that is, life on earth -
earch for principles of living system
rticular substrate. Thus, Artificial Lif
exploring artificial alternatives to a 

described as attempting to underst vel rules; for example, how the simp lead to high-level structure, or the etween ants and their environment ior. Understanding this relatinoship provide novel solutions to complex ention, stock-market prediction, and

iving systems out of non-living part l the areas of Artificial Life. At prese two largely independent endeavors: al building blocks of nature (carbon sing the same principles but a differ computer. The former explores the ng to construct self-replicating mole populations of self-replicating enti eristics of different chemistries in su us, both the biochemical and the co hed light on the compelling questio

ot only about the construction and her artificial or natural; an impressive rds the construcion of adaptive aut m the classical robotics approach, in its environment and learns from thi

All of this is from a description of artificial life found on alife.org.[5]
