Want to discuss technology?

After the discussion over here, a number of us have purchased The Technological Society by Jacques Ellul and have started reading it. Here is our reading schedule, for anyone who is interested in reading along with us:

  • Week of January 20 - pages 1-73
  • Week of January 27 - pages 74-146
  • Week of February 3 - pages 147-219
  • Week of February 10 - pages 220-292
  • Week of February 17 - pages 293-365
  • Week of February 24 - Done

And if you’re reading along (@projanen, @jtbayly, @connell), please post your thoughts here!

2 Likes

Regarding technological advance, I found this article about the Luddites fascinating:

Also, the conversation about it on Hacker News was instructive.

I post this here because Lucas and I were talking by phone about the “inevitability” of advance on various fronts, including surveillance and automation taking away (even) more jobs.

It turns out the Luddites were not as backwards and irrational as I thought.

Well, I think I’ve gotten to page 160 or thereabouts in Ellul. He’s obviously brilliant, and you’ll learn a lot if you’re willing to slog through it.

I’ve decided that a better bang for your buck is Neil Postman’s Technopoly. Postman has clearly read Ellul and is downriver of the Frenchman, so you can tell when he’s building on Ellul. But Postman’s book is much shorter and much denser than Ellul’s.

I also took the time to listen to C.S. Lewis’s extended essay, “The Abolition of Man.” I think it’s crucial reading as the backdrop for these discussions. While Ellul was a Christian, Postman certainly was not, and he comes at the topic with an Enlightenment mentality. Think old-school liberal. So Lewis provides helpful analysis that Postman leaves out.

I do plan to continue with Ellul, but not until I finish Postman.

Further thoughts are welcome. This is a difficult topic! :flushed:

I retitled this topic because we really have moved away from Ellul. At this point, I’m not convinced I’m going to slog through. After further discussion with @jtbayly and another friend, I decided to pick up a print copy of No Sense of Place. It was billed as “the best thing to read on this topic.” I’m less than 20 pages in, but I am hopeful.

However, it has been made abundantly clear to me in this exercise that I can consume books by audio at a much faster pace than anything I have to read. I don’t have a long commute, but even so, it’s pretty easy to find an hour a day to listen while I’m doing something else. And some days I can listen much more than that. So I can get through a 10-hour book in just a couple of weeks by simply “filling in the cracks.”

So I’m also about halfway through listening to Habeas Data, and it has been more helpful than I thought it would be. I was originally looking for an audiobook version of Data and Goliath that I could listen to through Hoopla, but it was not available. Habeas Data is, and so here we are.

Here’s what I wrote last night in response to listening to Habeas Data:

I think it is much more valuable to think about technology as magic than as technology. Why? Because it personalizes the technology. When you think of a wizard performing some super-human feat, like seeing something that is taking place thousands of miles away, you assume that it is a power held by that person. It’s somehow intrinsic to that person. And if you imagined a society in which such wizards were common, it would be natural to assume that laws and customs would grow up around them to govern their use of those super-human abilities.

When you think of impersonal technology, however, it’s hard to get your mind around how it should be governed. After all, it’s simply a bare fact that a camera and a plane working together can surveil hundreds of square miles in a short amount of time, at a resolution of feet and inches. It exists, and there’s simply nothing you can do about it.

But when you remember, as we always must, that people are responsible for taking the pictures, processing them, using them and storing them, then we can talk about governing those people. They may have super-human abilities, but those abilities, like our “regular” ones, can and should be governed. And that governing can happen both through laws and social customs.

Another important reason to think like this is because this is not a government vs. non-government question. The technology used to perform many of these “magical” powers is so cheap that we can all, to some degree or another, have them. I think it’s right that the government shouldn’t be allowed to use thermal imaging to look inside my house without a warrant. But I think it should also be illegal for anyone else to do it.

In contrast, I hardly ever listen to audio because it is painfully slow; for me, reading is several times faster than listening. Plus it’s much easier to go back to a section of text and re-read it carefully than to rewind audio. It’s unfortunate that so much material here is put out by podcast rather than text, because I don’t have the time or patience to listen to it.

I’m listening to Habeas Data at 1.25 speed, and I’ve thought about going higher.

That said, you are right about going faster by reading, but all my listening happens when I wouldn’t otherwise be able to read. So it still seems like a win.

1 Like

It’s a perfect example of a change in communication technique/technology that Postman discusses a lot. The medium change has major ramifications, some of which are expected and others unintended and unexpected. Hard to say that it’s good or bad as a whole. More like, adopt it hesitantly, watch for negatives.

Another thing that Postman talks a lot about in Technopoly is content filters. He explains why they are necessary, and he is quite convincing. I’ve been seeing the need for filters as well as the debate over who controls them in a whole different light since reading his book. One of the things he says is that the family that cannot control the information environment of its children can barely lay claim to the title of “family.”

At first I thought about this entirely negatively, as a filter. In other words, a family must be able to say “no” to their children being exposed to certain “information,” such as pornography, as well as to say “no” to their children being taught certain “information,” such as that homosexuality is just as natural and good as heterosexuality.

But now I am also thinking about this in terms of the positive right to inform their children of certain things that our tech overlords are declaring anathema. This still comes down to filters, but my latest realization is that the battle we are seeing play out is the battle for control of the filters: which filters, or types of filters, get implemented at which levels of the technological stack (see the sketch below).
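
To make the “levels of the stack” point concrete, here is a toy sketch in Python. Every domain, term, and function name in it is made up for illustration; it is not any real product’s filtering logic. The same blocking decision can be enforced at the network level (a resolver refusing to answer) or at the application level (software inspecting content):

    # Toy illustration only: all domains and terms below are invented.
    BLOCKED_DOMAINS = {"example-bad-site.com", "another-bad-site.net"}

    def dns_level_filter(hostname: str) -> bool:
        """Network-level filter: refuse to resolve a blocked domain at all.
        Coarse, but it covers every device behind the home resolver."""
        return hostname not in BLOCKED_DOMAINS

    def app_level_filter(url: str, page_text: str) -> bool:
        """Application-level filter: the request goes through, but the
        content is inspected. Finer-grained, but it must run on every
        device and inside every app."""
        hostname = url.split("/")[2] if "//" in url else url
        if hostname in BLOCKED_DOMAINS:
            return False
        banned_terms = {"example-banned-term"}
        return not any(term in page_text.lower() for term in banned_terms)

    print(dns_level_filter("example-bad-site.com"))              # False: blocked upstream
    print(app_level_filter("https://ok-site.org/page", "fine"))  # True: passes both checks

Each check is trivial; the real fight is over who gets to write BLOCKED_DOMAINS at each layer: the parent, the ISP, the platform, or the state.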

After flooding us with information (Postman calls this “information glut”), the tech firms are now offering to “fix” the problem by filtering the information. Sure, they’ll offer the ability to block somebody, but they are also taking on the role of cultural censors. Today, the latest addition to the blacklist is any sort of argument against vaccination.

Our tech Lords are insisting that they will save us from ourselves and our ignorance. This sort of claim has always been a bad sign, and it is no less so today.

3 Likes

I reread The Abolition of Man a couple of years ago. Lewis’s foresight is incredible. It’s like he saw the early 21st century from his position at a time when the Third Reich was still a going concern.

2 Likes

I want to discuss technology, but I don’t want to do the work of reading this book.

That’s probably okay, so long as you give extra respect or credibility to the voices of those who have read them.

1 Like

Feel free to contribute without reading any of these mentioned books, but boy would Lucas and I both recommend Technopoly. We both listened to it. My wife is loving listening to it, too.

4 Likes

To borrow a phrase, man was not made for tech, but tech was made for man. Yet so many technophiles and those who are in charge end up reducing man to machine so that technology can be declared a success.

Here’s one example. As is the case for all professions, it is difficult for outsiders to evaluate those inside; in my particular profession, that means it is difficult for those outside the field to evaluate the scientists within it. That’s a big problem for the managerial mindset running the U.S. right now. But big data and algorithms to the rescue! Since better scientists tend to publish more articles in more prestigious journals and gain more citations, a system gets set up to evaluate scientists according to metrics of publications and citations. This is applauded by technophiles, since it makes use of data and algorithms that promise “quantitativeness” and “objectivity,” and also by those in charge, since they can now “manage” without needing to actually know anything about the people they are evaluating.

And not surprisingly, more scientific articles are published and cited than ever before. But amid the celebration, how many people have noticed that what we wanted was better science, and what we’ve gotten is more publications? The scary thing is that people who ought to know better have begun treating publication metrics as if they were genuine scientific productivity. And hearkening back to the global warming debate on the other thread, it’s going to be hard for anyone with a non-mainstream view to survive in this sort of evaluation environment.
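
Here is a toy simulation in Python of that dynamic. All the numbers are invented; the point is only that once people are ranked by a proxy (a publication metric), the top of the ranking goes to whoever optimizes the proxy, regardless of underlying quality:

    import random

    random.seed(0)

    # Invented model: each scientist has an unobservable "quality" and
    # chooses how much effort to divert into gaming the metric
    # (salami-sliced papers, citation circles, and so on).
    def publication_metric(quality: float, gaming_effort: float) -> float:
        honest_output = quality * (1 - gaming_effort)
        gamed_output = 3.0 * gaming_effort  # gaming pays off quickly
        return honest_output + gamed_output + random.gauss(0, 0.2)

    scientists = ([("honest", q, 0.0) for q in (1.0, 1.5, 2.0)]
                  + [("gamer", q, 0.8) for q in (1.0, 1.5, 2.0)])

    ranked = sorted(
        ((label, q, publication_metric(q, g)) for label, q, g in scientists),
        key=lambda row: row[2],
        reverse=True,
    )
    for label, quality, metric in ranked:
        print(f"{label:6s}  quality={quality:.1f}  metric={metric:.2f}")
    # The gamers dominate the top of the ranking at every quality level:
    # once the measure became the target, it stopped measuring the target.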

Here’s another example. Some years ago it was announced that AI could grade writing assignments with the same reliability as humans. Technophiles loved this prospect, of course, as did those in charge who hoped to cut labor costs by substituting AI for instructors. But when I drilled down, I discovered that the human graders had been given only two minutes per assignment, essentially reducing them to simple-minded robots, and that the AI returned the same grade even when all the words in an assignment were randomly reordered. What had happened was that during training the AI found that better writers tend to use longer and rarer words, and it rated the assignments according to that metric.

Algorithms have improved since, but there is still no prospect of an AI understanding content or judging the effectiveness of an argument. Instead it’s all based on surface characteristics, such as complexity of grammar and vocabulary, and students are now being trained to write to those metrics. What I wanted were students who could write and communicate effectively, but what I’ve gotten are students who use overly complex grammar and vocabulary yet can’t make a coherent, logical argument. The more tech is used to scale up and cut costs in education, the more dumbing down we get in order to accommodate the limitations of the tech.
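
To see why such a grader is blind to word order, here is my own toy reconstruction in Python (not the actual system I evaluated): a scorer built only on surface features hands back exactly the same grade after the words are shuffled into nonsense:

    import random

    def surface_score(essay: str) -> float:
        """Toy grader in the spirit described above: it rewards longer
        words and a richer vocabulary, and never looks at word order."""
        words = essay.lower().split()
        if not words:
            return 0.0
        avg_word_length = sum(len(w) for w in words) / len(words)
        vocab_richness = len(set(words)) / len(words)
        return round(2.0 * avg_word_length + 5.0 * vocab_richness, 2)

    essay = ("The unprecedented ramifications of technological hegemony "
             "necessitate scrupulous deliberation")
    words = essay.split()
    random.shuffle(words)
    scrambled = " ".join(words)

    print(surface_score(essay))      # big words, high score...
    print(surface_score(scrambled))  # ...same score, zero meaning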

Tech was supposed to be the servant, but it ends up being the master.

1 Like

This is a well-known problem called Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.”

The problems of subjectivity, inconsistency, and opacity that come from having humans as judges are intuitively solved by using a computer, since it follows rules rigidly and consistently. But the more AI advances, the more those exact same problems arise again. And now, instead of a judge you can hold accountable for his judgments, you have a judge following a computer system’s orders: a system installed by bureaucratic decree and programmed by somebody who literally cannot tell you what rules the computer is following. All he can tell you is what percentage of previous cases the computer would get “right” when tested. And “right” is determined by having a human judge the case.

It is precisely for this reason that we should expect every single AI system to have unknown biases programmed into it. Selection bias and judgment bias are essentially impossible to remove; they are the very problem we were attempting to solve in the first place. Garbage in, garbage out.

Meanwhile, the programmer might discover one of these problems and attempt to fix it, but he will have no idea whether his new fix is undoing fixes he has already made to other problems. And in fact, the more you tune the system to fix particular problems, the worse it performs on the general problem.

AI cannot fix the problem of human subjectivity. It can only mask it by moving it a few layers away from the original subjective judgments.
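
Here is a minimal sketch of the “garbage in, garbage out” point, in Python with entirely synthetic data: when the human labels used for training encode a bias, even the simplest model trained to match them reproduces that bias, now hidden behind a layer of code:

    import random

    random.seed(1)

    # Entirely synthetic: human judges approve qualified applicants, but
    # approve group "B" less often even at equal qualification. That is
    # the hidden bias baked into the training labels.
    def human_label(qualified: bool, group: str) -> bool:
        if not qualified:
            return False
        return random.random() < (0.9 if group == "A" else 0.5)

    applicants = [(random.random() < 0.5, random.choice("AB"))
                  for _ in range(10_000)]
    labeled = [(q, g, human_label(q, g)) for q, g in applicants]

    # The simplest possible "model": per-group approval rates learned
    # from the labels. It faithfully inherits the gap.
    def learned_rate(group: str) -> float:
        decisions = [lab for q, g, lab in labeled if q and g == group]
        return sum(decisions) / len(decisions)

    print(f"qualified group A approval: {learned_rate('A'):.2f}")  # ~0.90
    print(f"qualified group B approval: {learned_rate('B'):.2f}")  # ~0.50
    # Any model trained to match these labels reproduces the same gap;
    # the subjectivity isn't removed, just moved a few layers away.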

2 Likes

The second you’ve established hard rules about something, you’ve designed a game. And once you have a game, someone will start gaming it.

I have seen it over and over in the corporate world in everything from performance evaluations (“Your customer service skills are a 4 out of 5”) to bonus awards, stock option awards, and so on.

There’s just no substitute for leadership.

1 Like

Good point. Never thought about it that way.

Also, you’ve ruined every reward I’ve ever received :grinning: Just kidding.

Don’t be down on yourself. I’m sure you earned that T-Ball participation trophy! :wink:

1 Like

Yes, Goodhart’s Law, the substitution of metrics for leadership, and so on. But what I find most interesting is that despite the obvious failures, people continue to forge ahead, for example by relabeling the measure as if it had been the target all along. And why is it regarded as acceptable to substitute an oversimplified management system for genuine leadership? My point is that these attitudes reflect the mindset of the current world, in which humans are viewed as machines and therefore treated as if simple algorithms can convert the input into the desired output.

When it comes to bias, it seems that worry about negative impacts on favored underrepresented groups is about the only thing that is able to dislodge AI hegemony.

Given the volumes of cash in online marketing, and how lousy online ads are, I think the singularity may be a bit farther off than the most excited boosters believe. :slightly_smiling_face:

Like you, I am totally baffled by this.