Some time ago I was asked what “analog” means, as opposed to “digital”, when discussing media formats (like documents, images, videos, etc). The person asked me over instant message chat - so the discussion was captured and I can now easily share it. The discussion is lightly edited for clarity and privacy.
My statements are formatted normally; the asker’s in block quotes.
Can you please clarify “analog” in the following sentence?
“The line that formerly divided everyday analog or traditional activities and specialized digital projects has eroded, and the creation of digital image collections is now an integral and expected part of the workflow of museums and other cultural heritage organizations.”
What does the term “analog” actually mean?
In this context, analog is intended as a distinction from digital. For a start, everything we can directly sense (with human senses) is analog.
Generally, analog sources are captured as digital translations because digital is easier to copy and transfer without loss.
Analog means, roughly, continuous.
So, what does being continuous have to do with human senses?
This is an area where language is poorly developed because most people never think about these things. Because our senses are all we directly experience, people never needed much of a vocabulary to describe anything beyond them - for example, the low-frequency radio waves used in radiotelegraphy.
For a start, analog and digital are mutually exclusive.
Digital is a way of quantizing (or bucketing and counting) things that are otherwise continuous.
Have a look at this image – The chart itself doesn’t matter, I’m just choosing something for the sake of discussion. But note that we’re counting discrete things (arrivals) at a frequency per regular period.
What we choose to count and how frequently we count them depends on our purpose, but by doing that, we can toss out the information we don’t care about (like exactly when those 2 people per minute arrived) and instead focus on how often, relatively, 2 people arrive as opposed to 3 people per minute. That chart represents about 1000 data points, but it represents them in a way that abstracts from the details we choose to treat as irrelevant.
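The counting described above can be sketched in a few lines. A hypothetical illustration (the arrival times here are randomly generated stand-ins, not real data):

```python
# Hypothetical data standing in for the chart: ~1000 arrival times (in
# seconds over one hour), reduced to counts per minute, then to a histogram
# of how often each per-minute count occurs. The exact times are thrown away.
from collections import Counter
import random

random.seed(0)
arrivals = [random.uniform(0, 3600) for _ in range(1000)]  # one hour of arrivals

per_minute = Counter(int(t // 60) for t in arrivals)  # bucket arrivals by minute
histogram = Counter(per_minute.values())              # e.g. how many minutes saw 2 arrivals

print(sorted(histogram.items()))  # 1000 data points, boiled down
```

The histogram keeps what we care about (how often a given arrival rate occurs) and discards the rest.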
How can the chart itself not matter when that’s all that’s on the page that you sent? What should I be looking at?
I’m saying that I chose this chart at random, except for the fact that it’s a histogram, which I’m using as an explanatory metaphor for digital.
Right, so let’s back up a minute. If you sat in a hall at school and observed people walking by, there are an infinite # of things you might observe.
You might see the shoes, or the stride, or the number of people in clusters, or whether they were chewing gum.
You would forget those details you weren’t specifically seeking, though.
The world as you observe it is analog - there are signals available, but irrelevant to the purpose – in this case, how often do people arrive?
If you tried to describe the hallway to somebody, it would be what you could recall, and it would necessarily be lossy.
Lossy? Is that geek-talk?
Sorry, an adjective used to describe compression, whether analog or digital.
You would not be able to fully convey all that you had observed. Nor would you try because of the limits of time, your memory, etc.
You mean selectivity?
The telephone game is an example of lossy communication. What would the telephone game be like if you wrote down what you heard before you repeated it? No loss, right?
Okay. Minimal loss. Okay. I’m with you still.
So now, one way you might count arrivals is to keep a stopwatch and a log of times of arrival, one time per arrival.
Can we go back to the hallway scenario?
Yes, I’m still on that. You’re in the hallway, but now instead of watching, you have a context and purpose.
Okay. So I’m in the hallway and I’m counting people as they arrive.
All other details are irrelevant to that goal: Just time and person. Right?
One way you might record the needed information is to video tape it.
Or to mark a line on a tape that travelled by at a steady rate, so that distance on the line marked time. Does that make sense?
Stepping back from this example, there are lots of details in the natural world that are irrelevant for any given task. There is noise on the phone line that is not for the purpose of communication.
There is static on the record being played.
There is moisture in the air between the speaker and your ear. There is wind, pressure, angle of your ear to the speaker. All of these things affect what you hear, but are not the sound you are interested in. Okay?
Still with you.
So then, just as the idea of marking times of arrival in a log is a way to discount all irrelevant information for the goal of counting arrivals, digital formats are a way of discarding information unneeded for the context and task.
So translate that into images - I don’t see how that relates to visual.
Well, that’s a step too far yet. Keep in mind, this idea is generally applicable.
Okay, you mean digital is generally applicable, right?
The process of excluding task distractions is digital? In sound, for example, it’s about excluding sound that you don’t want?
Hmmm. Not quite, let’s separate this a bit. Digital is a specific way of excluding distractions.
Are there other ways?
There are other ways. It is possible to do this with analog as well, but I am suggesting the motivation for digital.
Okay, keep going.
So, digital formats accomplish this discounting by specifying, with intention, exactly how the data is to be recorded.
Okay. That explains why there are so many formats, right? They are each a different “formula” for recording with discounting?
In audio, that might mean a way to record only the frequencies in the human hearing range, excluding sounds which are masked because one is louder than the other. That might mean analyzing the ability for people to distinguish between different frequencies or durations and excluding things we can’t notice anyway. And so on. Does that make sense?
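That kind of bucketing can be sketched numerically. A minimal illustration, not any real audio codec - the 8 kHz sample rate and 8-bit depth here are arbitrary choices:

```python
# Sample a 440 Hz tone and quantize each sample. Sampling and quantizing are
# both "bucketing and counting" -- one in time, one in amplitude.
# (8 kHz and 8 bits are arbitrary illustrative choices, not a real standard.)
import math

sample_rate = 8000   # how often we count (samples per second)
levels = 256         # how finely we count (8-bit amplitude)

samples = [math.sin(2 * math.pi * 440 * n / sample_rate) for n in range(sample_rate)]
quantized = [round((s + 1) / 2 * (levels - 1)) for s in samples]  # map [-1, 1] to 0..255

print(len(quantized), min(quantized), max(quantized))  # one second of sound as 8000 integers
```

Everything between the quantization levels, and everything above half the sample rate, is deliberately discarded.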
Yes. In the audio realm, it makes a lot of sense. I just can’t figure out what it has to do with images. What could I possibly want to exclude visually?
Quite a lot, actually.
Was that a stupid question?
No. As I say, we are generally limited to our range of senses; it is natural to wonder what there might be aside from that (or to assume there is nothing aside from that). For example: Galileo must be crazy to think that a planet could be so small in the sky; here we are on a planet and it is huge. Clearly what we stand on is the center, because all things revolve around it.
I’m just saying that, if something is a part of what I see in a real-life situation (i.e., I’m looking at a scene of a couple sitting in a park), why would I take away from that in a digital representation of the scene? It would be less authentic, and I would think digital representations (visual ones) would want to bring you as close to the real experience as possible.
Because there are an infinitude of rays of light, bouncing in many ways, outside your range of vision, too small to notice. Let’s hold a moment on the analog/digital thing.
This problem of observing things as they are, not as we perceive them, is called “frame of reference” in physics.
You mean the way we see them is the frame of reference or the way they really are?
The frame of reference defines how we observe them.
We say that a 7’ man is tall because most are shorter. But he is not tall compared to a giraffe.
Okay. Makes sense.
There is no true observation, only relative to a frame of reference.
Okay. I’m with you.
Gravity is a force which pulls us down. Except it isn’t, it just most often appears that way to us.
You mean there is no universal observation?
Observation is defined by object and subject.
Okay. Still with you. Why do you say that gravity isn’t a force that pulls us down. What else is it besides a pull from one object to another?
Because it is a force that draws between any 2 things with mass.
You were referring that the gravity can flow in any direction, not just down?
And between any 2 things with mass.
But we think of it as pulling down since it’s how we experience it?
It’s just that, on Earth, the planet is always the largest thing in the coupling. Which is why the cup sitting on the table next to me doesn’t pull toward me because it’s responding to the greater gravitational pull of the Earth instead of my gravitational pull?
Well, now we’re getting off track, but no, that is due to friction. Both you and the cup do, in fact, pull at each other. Just imperceptibly.
Friction interferes with the pull?
Okay, keep going.
So, back to visual. The main thing that defines a digital format is that any information (any level of detail, any fact at all) can be represented in some way using only numbers.
What other formats are there?
So how does an analog format represent things?
Without discrete counting - continuously, as I said. The thing about numbers is, they are discrete - countable, separate, of distinct identity.
So why is that a good thing in formatting?
You mean digital?
Does that mean we’ve made progress or are you yelling at me?
Progress! Suppose we have a digital format defined, and for the purpose we have in mind, say, a digital image.
What do you mean by “a digital format defined?”
Remember, when we want to record something digitally, we’re keeping in mind the information that is important for the goal, and we are counting it in some specific way.
A line on the tape as it goes by, or a count per minute. Or any other way.
But the point is, the thing that gives meaning to the recording is the format we defined. If you just saw a piece of paper with a lot of numbers on it, you wouldn’t know that it was a count of arrivals per minute, 1 per minute, starting at a given time. The format is that specification. Given that information, now you can interpret the numbers. Remember, this was boiled down from observing the hallway, with all its details, for say an hour. To make 60 numbers. All neat and tidy.
So now, the thing that makes digital good for archival is that once recorded (assuming the media stays good), copying has perfect fidelity. That is because it is quantified, because it is based on discrete values. In analog, when you try to copy, all of the irrelevant details not just in the original recording, but also at the time of copying, are captured in the new copy.
But how is an image based on discrete values / numbers? This number is 5. That number is #78. But what are 5 and 78?
Sure, the simplest format to understand is called bitmap. Imagine you had some graph paper. And you want to represent an image with that paper.
The most natural thing to do is to directly color in the squares, probably.
You lose a bit of information in doing so, because perhaps there are curves that don’t line up with the image. But if we make the squares small, then the difference is small. Small enough, and we don’t care about that difference any more.
Okay. Still following you.
OK, now what color did we color with? Perhaps we only have 32 pencils and we can see it doesn’t match, so we add more pencils to choose from until the difference between the actual color and our chosen pencil is also so small that we don’t care any more. Still with me?
Why do we have a limited number of colors?
We can have as many pencils as we want, but not infinitely many. The thing is, there are an infinite number of colors. But we can’t actually tell them all apart.
Are there infinite colors? Really? How can that be?
Because they are not discrete.
That feels like circular logic.
No, just a moment.
I had to set up the frame of reference and the differences that don’t make a difference to get here, you know? Let me explain through an old paradox called Zeno’s arrow.
Are digital pictures fuzzy / blurry because they don’t have enough squares to make the difference unnoticeable?
No no. Remember, we can make the squares as small as we like.
You mean we can make them as small as we like in a digital format, right? Because we can’t make them as small as we like on graph paper.
Right, that’s just a metaphor. Let’s finish that metaphor, actually, and then go back to talking about why analog is infinite, OK?
So we have graph paper where we lined up the squares and made them as small as we like and chose as many pencils to color with as we like. But a countable number of each. (Both the same strategy, notice, just different applications of the same idea.)
Now let’s replace the pencil colors with numbers, a number per color.
And now we have, left to right, top to bottom, a series of numbers describing the image. You see?
That is a digital format - called a bitmap.
This way of representing images is called a raster image. There is another way of digitally representing called vector. We can save that for another time.
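The graph-paper-and-pencils idea translates almost directly into code. A sketch, with a made-up palette and a made-up 3x3 image:

```python
# The graph-paper metaphor made literal (the palette and 3x3 image are
# invented for illustration): a palette assigns each pencil color a number,
# and the bitmap is just those numbers, left to right, top to bottom.
palette = {0: "white", 1: "black", 2: "red"}

image = [            # 3x3 graph paper, one number per square
    [0, 1, 0],
    [1, 2, 1],
    [0, 1, 0],
]

stored = [pixel for row in image for pixel in row]  # the series of numbers in the file
decoded = [palette[n] for n in stored]              # meaningless without the format

print(stored)  # [0, 1, 0, 1, 2, 1, 0, 1, 0]
```

Note that `stored` by itself is just nine numbers; only the format (3 columns, 3 rows, this palette) gives them meaning as an image.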
Bitmaps are still used today because they are a straightforward way to represent images and were a common early format; they are a sort of lingua franca for raster digital images.
More generally, any computer file you use is some digital format – a big list of numbers whose meaning is given by the combination of the specific numbers and the format with which to interpret them.
Let’s go back to the outstanding questions: 1) why are some digital images fuzzy? 2) How is it possible that there are infinite analog values? 3) what are the pros and cons of digital files for archiving
Is there such a thing as an analog photo?
Sure, the original photography is analog.
Okay, what makes it analog?
The recording of light (analog) onto film (analog) with a reactive medium (analog). Remember, everything is analog unless we go through the step of quantizing and counting.
Yes, but once all those things come together, the photo’s physical reality is set / fixed.
Analog is not more or less real than digital. This whole idea of “real” vs. “digital” is anachronistic.
How many shades of red do you think there are? It would be like saying our senses are more real than the full range of signal. We know we can’t perceive all, and yet we are happy to ignore it mostly.
I started out this whole odyssey thinking that analog meant “real” / “3-D” and digital meant “2-D replica.”
Try to separate the idea of editing from analog.
You sort of made the leap that there was some relation between “the photo’s physical reality is set / fixed” and digital/analog.
I don’t think that’s a good relation to assume (or infer much from, anyway)
Well, it’s because the three things in the list that were analog (light, medium, etc.) are all in one condition before the picture is made, and then after the image is made it’s all static, right?
That is true whether it is recorded as analog or digital. For example, when you take a film photograph, what is captured is a representation.
But you can make photographs from the film which are quite different. You can use brighter light. You can put the film further from the photo paper so that it is larger. You can put a filter or mask on it. You can do a double exposure so that 2 photos are on the same film.
Okay. I’ve never been sure how that happens, but I know it can be done. Seems like the film would only be good once.
No, film, before being developed, has a continuum from unexposed to fully exposed. What makes an image is how much each part of the film is exposed, and by what frequencies of light.
So double exposure is taking two different instants and sources of light and recording it on the same spot of film.
Another way to think of it is that a given film will record a picture differently based on how bright it is, or how long the shutter is open.
Is it like carving where you take the medium away to reveal the image? Or sculpting.
Hmm, I would say more like carving a thin medium. You can get a relief, but not a full statue.
It’s possible to use very different techniques in double exposure, too. This one was one quick photo, then another slow photo.
So, if the first photo takes all the black away in a particular spot, then the second photo just doesn’t have any black to work with in that spot?
A lot of the same effects can be achieved with both recording approaches. Actually, I’m starting to view photography as a classic art. I think digital photography changes the range of expression, but it doesn’t really change what photography means.
Yes, well, I’m trying to understand the science of it right now.
I do think that a lot of people are drawn to library work because they really like artifacts.
Anyway, let’s go back to the idea of the infinite variations. I asked how many shades of red you think there are. 10? 100? 1000? 1000000?
Not sure. 1000 sounds crazy but possible.
Why does it sound crazy?
At some point it stops being red and changes into another color.
Ah, no, what about the spaces between?
What do you mean?
What are 2 very close hues or shades of red?
Crimson and blood red?
OK, now those are 2 colors you have names for. Do you think you can distinguish between them?
It’s more like I just accept someone else’s interpretation of the color, you know? They may call it crimson, but I may call it blood red or magenta.
But you agree that what you call it does not change what it is?
Yes. Agreed. I can generally tell the difference between two shades of red.
And the reason that those shades have names is because we can generally tell them apart. Look here.
When it gets REALLY hard to tell close up, we women just say that it’s close enough to be a match. No one’s going to know the difference from a distance.
Aha, so even then there is a difference, right?
And you stop having words for them because they are somehow close enough not to care.
We’re pretty sure it’s not a perfect match, but we don’t think it’s noticeable.
(For guys, that’s about 10 colors.)
Hee-hee! So true!
So now, there are colors that we don’t have names for. We just don’t care.
Yes, we don’t have names for every shade. We usually just say it’s a light shade of lime green, or a dark shade of blue. We stop specifying and start associating with generalities.
There are different models for describing color (RGB, HSV, CMYK, etc.) - any number of ways to describe them. And yet there are more colors than we bother describing.
Okay, but how does that make color infinite?
Bear with me on something that seems unrelated to your question - let me take a different approach for a moment.
What is the smallest number larger than 1? We might say 2, but that is a whole number, and 1.1 is definitely larger than 1.
Those are numbers, not colors. It feels like there has to be a limitation to colors because then they become different colors.
Colors are just points on a continuum of frequency.
Okay. That’s helping a bit. Keep going with the points on a continuum thing.
Here is a graphic to make it clearer just how small the visible band is compared to the full spectrum.
So you’re saying that there are other colors on the rest of the spectrum?
No, it isn’t really about the spectrum being larger than visible. The point is, when you consider light, you don’t think about UV or IR or microwave. They are the same thing, but not visible to your eye. So you don’t generally have a use for it. We call them something different precisely because we can’t see them. Not because they are different.
You see on that diagram, that smaller than visible (a narrow band), there is UV (ultra-violet) and below that is X-ray, and so on?
So, UV and IR have no color?
As we get smaller in the visible colors, we have names, until we don’t. And just as there are spaces outside the range of named colors, there are spaces inside the named color ranges. Those things are still colors, we just don’t have names for them.
Just as 1.001 is larger than 1 but smaller than 1.01. At some point, we stop caring, but they still exist as different values. However small you think to go, there are always more. Back to the arrow paradox, which is another way to say the same thing. An arrow is in flight from point A to point B. Over time, it must travel, but at any given instant, it is at some place.
Okay. I think I’m just going to have to accept that, just like I accept the massive numbers of stars I hear that there are but that I can’t wrap my mind around.
No, let’s not give up yet. Big numbers are not inconceivable, nor small. What is the highest number? Or the smallest? There is no such thing, right?
At least one more than the highest one I can imagine.
Right, that way of thinking is called telescoping. If you accept infinity, then there is no biggest or smallest. If you can prove there is one greater, no matter how far you go, then there are infinite numbers.
So take a color, and make a color just a bit redder than that. And make another color just a bit less red, but redder than the first. And so on, more colors between them.
That’s what I’m saying, though. It will eventually cease to be red and turn into a new color like brown or black.
No, remember we started with red and made slightly more red, then slightly less red.
Yes, well if you go slightly less red enough you’ll end up at some shade of orange.
Except we stay redder than the first! We don’t go from 1 to 2 to 1.5 to 0.5. We go from 1 to 2 to 1.5 to 1.75.
Correct on the numbers.
The same with colors.
Show me. Write it out for me. Replacing numbers with color.
Draw a line in red (or imagine it so). Now draw another line, a bit brighter red, to the right of that.
Then a line between those 2, brighter than the first, not as bright as the second. So now you have 3 red lines, a bit brighter as you go to the right.
Draw a 4th line, brighter than the 2nd, not as bright as the 3rd, between them.
But if you go to the left of the original line, you fall out of red, right?
Yes, but we don’t in this case.
So, between two shades of a color, there can exist an infinite number of shades, right?
Yes. We just lose track because we have trouble telling them apart, just as we would have trouble with 1.00000000001 and 1.0000000000001.
Still seems like at some point the shade will get so light the color will disappear completely.
OK, so instead of bright/dark, I say start with red, then a bit more purple, but still red. Then something less purple, but more red than the first. The same idea. Never going redder than the first or more purple than the 2nd, but always between.
But then there are limits.
The original red and the original purple are the limits.
You can say “no redder than red, no purpler than purple”, but yet there is always something redder than the first.
See now, that’s where I disagree.
Let’s go to the arrow. Suppose there is an arrow and you shoot it at the target, and it will take 10 seconds to reach the target. Let’s say that is 100m away. In half the time, how far has it gone?
Half the distance, supposedly, since it’s a constant rate of speed.
Sure, let’s assume that, so 50 meters. In half the time again, how far? Half of the remaining half, I mean - at 7.5 seconds.
Why would we assume that, though, when an arrow almost can’t go at a constant rate of speed?
Well you agree it will reach its target? I’m just trying to simplify the discussion so we can focus on the part that matters. Which is how far it goes in some bit of time.
If you start out with the proper combination of force, lack of wind resistance, etc., yes, it will reach its target.
So how far in 7.5 seconds?
It goes 50 meters in 5 seconds, so 75 meters in 7.5.
And half again, 8.75 seconds?
And half again, 9.375 seconds.
I get the math, just not your point.
And half again, 9.6875, and half again, 9.84375. The point is that the smaller the interval of time we measure, the smaller the distance it covers. We agree there is a full distance to travel, but we can measure that travel in infinitely small units of time and distance.
Correct. We can keep halving indefinitely.
At any instant, it has travelled, but it is in a specific place. So how many places is it in over the full 10 seconds?
Yes. It is always moving forward, always changing position, and it passes through an infinite number of places. All within 100 meters, 10 seconds. All depending how closely we measure.
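The halving in the arrow example can be written out directly (same numbers, same constant-speed assumption):

```python
# The halving sequence from the arrow discussion: each step adds half of the
# remaining time, and (assuming constant speed) a position to go with it.
total_time, total_dist = 10.0, 100.0  # seconds, meters

times = []
t = total_time / 2
while len(times) < 6:
    times.append(t)
    t += (total_time - t) / 2  # half of whatever time remains

positions = [total_dist * (ti / total_time) for ti in times]
print(times)  # [5.0, 7.5, 8.75, 9.375, 9.6875, 9.84375]
```

The loop could run forever: every instant names a distinct position, all inside the same 10 seconds and 100 meters.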
Okay, keep going.
So a thing which is half between red and purple (call it c1), then half between red and c1 (call it c1.5), then half between c1 and c1.5 (call it c1.25), and so on. Always between red and purple, but infinitely many.
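That construction can be played out with numbers standing in for hues. The values 0 and 300 are arbitrary stand-ins, not any real color model:

```python
# The c1, c1.5, c1.25 construction in numbers: treat red and purple as two
# hue values (0 and 300 are arbitrary stand-ins) and keep taking midpoints.
# Every new shade lies strictly between the bounds, and we never run out.
red, purple = 0.0, 300.0  # hypothetical hue angles

shades = [red, purple]
lo, hi = red, purple
for _ in range(20):
    mid = (lo + hi) / 2  # a new shade between the previous two
    shades.append(mid)
    hi = mid             # narrow toward red without ever reaching it

print(len(shades), shades[2:5])  # 22 shades so far -- we could go on forever
```

Every pass through the loop yields a shade distinct from all the others, yet all of them stay between the two original bounds: infinity within boundaries.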
So, infinity has boundaries? That’s where I’m getting stuck.
There can be infinity within boundaries, yes.
Maybe I’ve always imagined infinity incorrectly?
There isn’t a single infinity. It is easier to explain if there is no specific range.
To the contrary, it seems like infinity is easier to understand between boundaries than without them.
But infinity is about counting, not about range.
That makes sense now that you say it, but I’ve definitely held the impression that infinity is an extreme on a spectrum.
It’s easy to see something closer, farther, farther still, and so imagine something farther yet. Infinity is a hard concept for sure. But don’t feel bad; I’m still not sure I understand it, and neither are lots of people. And most of the world didn’t understand zero for most of history. How can you count a thing that isn’t there? Why would you? And yet, it’s useful.
Leave zero alone for now. I’m still trying to readjust my thinking on infinity.
Not for trading (which is how counting started) but for lots of things. And we ignore infinity because our senses don’t notice the difference between one frequency and one a tiny bit higher, or a color a tiny bit redder. That was meant as an assurance - there are things we don’t yet understand, but also things we once didn’t understand and now do.
You can expect to understand this one, too, with attention.
So, back to analog photos vs. digital photos.
Okay. Digital formats are a recent thing in the scope of libraries and history. Before digital formats, everything was analog. For example, an audio recording on a tape cassette. Each time you listened to the tape, it would degrade the tape.
The quality of the sound depends on both the recording equipment and the playback equipment. If you made a copy, that too would degrade the copy. A copy of a copy of a copy is bad indeed.
You remember the handouts teachers would give of their last copy of the homework which itself was a 10th generation copy?
Degradation in the copying process is an unavoidable consequence of analog. There are a limited number of artifacts, with a limited shelf-life, and copying to renew the life degrades the quality.
Because of that, rare works have limited audience. Only in controlled environments, perhaps through glass, or with gloves, or masks, or in cleanrooms. Or with robot manipulation of the pages to avoid tearing, and so on.
A promise of digital is perfect copying, giving infinite life for the artifact. Note that digital has its own problems - file formats, plus hardware and software to interpret them.
Even though digital artifacts and archiving have become more common, standardization has not yet made caring for a digital work as clearly understood as caring for an analog one. We know what makes a book live longer. Few know what makes a digital artifact live longer.
Wow, that’s pretty profound. So, why haven’t they gotten standardization down in digital yet? Is it a money thing?
It is mostly an issue of interdependence. You can directly observe a book. You need only know the language and how to read.
You can not directly observe a book in digital format. You need machinery to help that.
So the trouble with digital formats is that they must have interpretive tools. Between the object and the user, that is.
Someone can feasibly learn to read and learn a language, but one cannot feasibly construct a computer and write the software. Or rather, the priesthood who know how to make computers and software is too small. It is too difficult to gain the skill.
A fabrication plant to make a computer chip needed for the simplest computer costs a billion dollars, roughly.
Seems like all the more reason not to go digital.
You don’t have to, there are choices. But the choice to create digital or not is mostly in the creator’s hand, not in the library’s hand. The library is given an artifact to preserve. You don’t make them (or at least, not all of them).
Yes. What choices?
It is very unlikely that we will stop making computers. The main problem is that specific hardware and software are used to interpret the files. Translating an artifact from one file format to another file format is difficult to do well. Some would say impossible without loss. You can make perfect copies, but not perfect translations.
Right. So why can’t we make standards specifying the hardware and software?
OK, so the choices are:
- the creators all make things in a format you wish (per standards)
- or you translate all works as you receive them to your standard (accepting the likely loss in translation)
- or you keep works in their native formats and seek to maintain the software and hardware needed to interpret them.
Is technology improving so much that it is worth the instability created by not just picking a format / hardware / software and sticking to it (for at least a set amount of time)?
Yes. Let’s go back to the bitmap example.
Why aren’t we doing Choice #3 for at least a little while?
In my opinion, that is the correct choice.
When there are massive gains in technology, then we can all make perfect copies from the original in the new, massively improved format.
In 1998, when images on the web were new, and computers were more limited, a 30kb image was considered large.
That was a gif or jpg, which would translate to something like a 1mb bitmap.
If gif and jpg didn’t exist, images would have been too large to share on the web. And so there would be basically no artifacts. The constraints of technology form the corpus of artifacts. That was true before digital.
There would still be the originals.
But people created images because they could share on the web.
Only technology-created artifacts.
Papyrus is a tech-created artifact. Just sufficiently old to not seem so. So now, as I said, 30kb gif. At the time, that was large.
By today’s standards, a 30kb gif is tiny. So people make larger ones. You can not make the 30kb gifs magically better.
You mean the original ones from 1998?
Yes. Here is a daguerreotype. At the time, it was a pinnacle. Now, it is pretty poor. As was the 30kb gif. Plus: better images, way faster!
A downside of fast progress is that our perception of quality degrades faster as well.
Here is a 30kb gif as an example. And here is the current pinnacle. Note, that image is a single image. It is only presented as a viewer there because it is too large for a browser and the internet, even still. (Click on a red dot.)
Now, that is 17 gigapixels, or 68 GB, or nearly 3,000,000 times larger than the pinnacle 14 years ago. Did you click the dot and see the imperceptibly small things captured in clear detail?
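The arithmetic behind that ratio, for anyone who wants to check it (using the sizes as given above):

```python
# A back-of-envelope check of the scale claim, with sizes as given in the chat.
old = 30 * 1024      # the 30kb gif, in bytes
new = 68 * 1024**3   # the 68 GB image, in bytes

ratio = new / old
print(round(ratio))  # on the order of a few million times larger
```

Depending on how you round, the ratio works out to a bit over two million; "nearly 3,000,000" is the loose figure used in the conversation.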
I wouldn’t say clear detail. I could see people, but they were fuzzy.
Compared to the original dot.
Sure, I agree. Good images for the scale difference.
Imagine a book had been written 14 years ago, and it were 200 pages. Today, if books got bigger at the same rate, that would be 600,000,000 pages.
I don’t think that analogy works. The book at 200 pages was complete. Nothing left to see if you focus in. A photo is different.
Sure, but I’m just trying to give you a sense of the absurd scale of change in digital. Benefit, but cost.
I’m clear on that. And the cost is the instability.
There are 2 sources of instability:
- software that must run on a specific machine
- software that is closed (that is, impossible to create freely)
Why does software have to run on a specific machine?
Sometimes because the vendors make it so.
Because the software tells specific parts what to do?
Sometimes because of that, yes. And sometimes because there was no interest in making it otherwise.
But now that we’re aware of the problems caused by the specificity, why isn’t there more transferable software that can be used on many machines.
Because vendors have an interest in it being otherwise. Follow the money, mostly.
I wouldn’t ascribe malice to all closed software - it’s often that nobody is interested in, say, opening a Word doc in Linux.
Yeah, but we’re well past that now.
It has to be common knowledge that transferability is desirable.
Yes, to some extent. For libraries, the problem is worse than for normal concerns. Most people want to open the doc somebody sent them, or that they saved last year.
For libraries, you want to be able to open the WordPerfect doc that Christopher Hitchens used to write a book in the late 80s. I’m making up an example, but even so.
I think a useful system would be to have a digital formats endangered list: “Works in these formats are at risk. We estimate this many years before extinction.”
Why can’t we stop making them extinct though? They don’t just become extinct, we make them that way.
Sure, and again, there are 2 choices - provide high-fidelity translation to another format (but noting perfect transfer is near impossible) or maintain the life of the format.
Why can’t we just make new originals in the new digital formats? For example, you have a digital image of a painting in a format from 1997. In 2015 a new format comes out that’s much better. Why not go back to the original painting or analog photo and make a new perfect copy in the new 2015 format?
You can make perfect copies of the same format, but making a perfect translation is very difficult. Being sure it’s perfect is very difficult.
I know. Don’t just translate from the 1997 version to the 2015 on a computer. Actually create a new image going from analog to digital in the 2015 format.
Ah, you’re assuming you have the analog original. But 1) the analog has degraded since then; 2) the digital might be the only thing you have.
I think the fluctuations in digital have shown someone better hold on to the analog.
Many works are digital-native now. And what of your phone camera photos?
Seriously, though, I have meant to make an actual hard copy of all of them in case electricity becomes a thing of the past. I want to cry every time I see a 16-year-old who can’t write cursive and doesn’t see the point. They write in a manner that imitates the characters they see on screen. The personality of handwriting is being lost.
Some time ago I saw a pen that recorded what you drew on paper and then could create a digital doc from the recording.