Your Phone Is Your Private Space

This is the best article I’ve found about why Apple’s delayed plan to scan iPhones for Child Sexual Abuse Material (CSAM) is wrong-headed. It is part of the reason that I have concerns about Covenant Eyes. At least CE is voluntary and the government isn’t involved, but especially in its more recent form it normalizes being under continuous surveillance.


Like-minded critics of the technology have gone so far as to post open-source demonstrations of how bad actors could trick innocents into possessing innocuous images that have been manipulated to match the numeric codes, known as hashes, that nonprofit groups’ databases use to identify known child-sexual-abuse material. Using such techniques, a sophisticated foreign-intelligence service (or perhaps a teenage troll) could theoretically cause trouble for American iPhone users—which is to say, a substantial percentage of both average citizens and business, government, and academic leaders.

I was happy to see this point made, as I hadn’t seen anybody else make it yet.
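The attack described in the quoted passage can be sketched with a toy example. The hash below is a simplified "average hash" invented for illustration; real systems like NeuralHash or PhotoDNA are far more sophisticated, but they share the same structural weakness: visibly different images can be manipulated to map to the same hash value.

```python
# Toy "average hash": each bit records whether a pixel is above the
# image's mean brightness. This is a deliberately simplified stand-in
# for real perceptual hashes, not any vendor's actual algorithm.

def average_hash(pixels):
    mean = sum(pixels) / len(pixels)
    return tuple(p > mean for p in pixels)

# Two clearly different 2x2 grayscale "images"...
img_a = [10, 200, 10, 200]
img_b = [90, 180, 80, 250]

# ...that nonetheless share a hash, because only the above/below-mean
# pattern matters. An attacker can perturb pixel values freely as long
# as that pattern is preserved, forcing a match against a target hash.
print(average_hash(img_a) == average_hash(img_b))  # True
```

The open-source demonstrations mentioned in the article do essentially this against NeuralHash, just with gradient-based optimization instead of hand-picked pixels.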

Technology has come scary far, scary fast when it comes to creating the means to extract and handle data. There’s money to be made in all of that. But I don’t think it’s come nearly as far in the area of prioritizing the forensic tools necessary to identify fraud and clear innocent people who have been digitally framed for something they didn’t do.

Think about the legal battles that still go on about things like speed cameras. Should video recordings taken by third-party, non-law-enforcement agencies be sufficient evidence in a court of law to charge me with a crime? Set aside the legal distinction between capturing a license plate and witnessing a human being commit the crime, and consider just the data implications. Who manages the data? Who keeps a record of who accesses the data? What assurances do I have that the data hasn’t been doctored? Without these kinds of forensic considerations and assurances, we should seriously question whether digital data should even be deemed valid evidence in a court of law.

Having worked in telecom and managed network services for most of my adult life, I have observed that many (if not most) companies – even ones that are in supposedly regulated industries – are dangerously ignorant when it comes to data security. There are so many facets to it that people just don’t think about. It’s one thing to protect data from being leaked out of your databanks. It’s another thing to track and account for how that data ended up coming into the databank to begin with. There aren’t nearly enough layers considered.


If only we could get as far as protecting data from being leaked out of databanks! Twice (!) in the past seven years, databases holding a comprehensive set of my personally identifiable information have been hacked. The first breach did not concern me much, since it was presumably carried out by a foreign intelligence service and I am not in a line of work where that would matter, but the second was carried out by cybercriminals who then published my information (and that of my wife and children!) on the darknet when my employer presumably rebuffed their extortion attempt.

The reality is that we are going to lose all privacy when the government wants to collect detailed information on every person, wants that information centralized in digital form, and wants that information to be easily accessible. Neither the government itself nor the private organizations required to hold that information are able to maintain sufficient security, and there’s no hope that they will until we expect software engineers to meet the same safety standards as bridge engineers. But that will never happen, because there is too much shoddy legacy code, and it would cost more than the government and private organizations are willing to pay to start over from scratch and do what it takes to get better security. So we’ll continue to patch fundamentally insecure systems and offer to pay for the first twelve months of credit monitoring when the data are hacked. And with hacking occurring at increasing frequency, eventually everyone’s private data will be publicly known.


And who interprets the data? If it is an algorithm interpreting the data, how can we guarantee it is correct? If it is a machine learning algorithm, in all likelihood we can’t ever guarantee it is correct.

Take this story for example:

And it’s yet another thing to guarantee that nobody has tampered with the data in your databank.

There is significant doubt in my mind that this is possible. At the core of computer science there are no materials and physics; there are computers, and computers have software at their core. There is so much other software between a developer and the end result of his code that it’s not clear to anyone how you can ever know whether your bug-free code (assuming you could guarantee that) is doing what it says it does. The compiler, the firmware, or any number of other things may have compromised your perfect code.


Proving that code is bug-free is actually the subject of a substantial amount of literature in computer science. Much of it comes down to the definition of terms, but software very commonly works in the space of wicked problems, where a solution isn’t known until after it is produced. Civil engineering usually works in much more defined spaces. All that to say, the requirements or specifications of software are very commonly buggy themselves, so proving that your software meets the spec can only get you so far.


The simple fact about big tech and data is that if they can track it, they will track it. Privacy laws and privacy policies should be understood to be completely worthless besides the occasional $12 payout for participation in a class action lawsuit. There are already globalist international networks of intelligence agencies who spy at scale on the world, and until Edward Snowden blew the whistle, they denied it even existed. So any idea that we should trust these people to protect data and privacy is a pipe dream.

None of the solutions are simple, but we can know the right trajectory: FOSS software, self-hosting, simplification, and keeping the right things as “analog”.


The catch is that all of those things are difficult. Even simplifying.


True, but right now, people are not even picking the low-hanging fruit. Most people are still just using Google, Google Chrome, Gmail, WhatsApp, FB Messenger, Windows, and so on. By far the biggest problem is lack of motivation.

All of this is true, which is why I said it would be necessary to start over from scratch. It’s not impossible, any more than it is to build airplanes and develop a commercial air travel system with 1 fatal accident per 10 billion miles flown. Our problem is that we have metaphorically rebuilt the physical and social infrastructure of our nation on a foundation of software developed by an industry that lives by the motto of “move fast and break things”. The current deplorable state of software would be tolerable if we still had air-gapped computers and control systems, but now that everything is networked together, the impacts of failures are immensely multiplied.

A lot of the computerized world still runs on COBOL, because the new stuff (and the new programmers) aren’t up to snuff.

Bank transactions, airline reservations, supply chain logistics, …the important stuff.

Fortunately, still today, when a modern system screws up a grocery order or your corporate/gov’t paycheck, there are old farts who can find a workaround under the hood, sometimes quicker than the youngsters/offshoremen can fix a front end, API, etc.

I predict Indian universities will start churning out COBOL coders to fix the looming extreme shortage. They’re not unwilling to teach what’s needed. I do not look forward to the day when the most important COBOL systems get decommissioned, replaced by our modern throw-away/never-ending-never-finished code.

HR systems (scheduling, timecards, attendance) are currently being replaced broadly by modern systems at big companies, and it’s going well. But showing up for work when the store computer doesn’t have you scheduled doesn’t mean you can’t start stocking shelves while the manager corrects the oversight, perhaps not correcting it in the system of record until the morning or the weekend. Imagine what would happen if your flight reservation confirmation number didn’t work when you showed up for a flight.


If COBOL were better, it would still be getting deployed. Modern languages are far better than COBOL at loosely coupling architecture and separating concerns.

COBOL was the first widely-adopted business programming language, so the companies and business domains that had the most to gain from computerization adopted it when it was young: banks, government agencies, insurance companies, credit card companies, airlines, logistics companies, etc. That’s why these seemingly important institutions depend on COBOL, not because it’s more reliable per se, but because they built the core functions of their businesses around it and that’s always hard to change. Companies that grew up 20 years ago did the same thing with Java.

These COBOL programs do have the reliability advantage of having run for decades, which isn’t nothing, because there is no testing quite like running in production.

But its persistence in many of these domains is more a function of its inflexibility (see also tightly-coupled architecture) than its inherent reliability or inherent worth. Note that companies that adopted COBOL systems in non-core functions like accounting have largely replaced COBOL with more modern tech. It’s only companies that are stuck running poorly-factored, poorly-documented COBOL at the very core of their businesses that are still using it.

I was almost laughed out of the room at a large credit card company for suggesting that my team ask the core cardholder approval team to make an entirely sensible change to the COBOL that drove approvals.

I seem to recollect that “the computers went down” was an entirely normal occurrence that caused delays all the time at airports in the 1980s.

If iOS had been written in COBOL using the techniques of the 1960s, you still wouldn’t have copy and paste.


I agree, but I rarely see modularity done right. I’ve tried myself and failed, and I theoretically know how to do it well. It’s rare for a modern system to be developed with enough attention, resources, and patience. Maybe it’s always been rare.

COBOL. My old enemy.

I took one semester in college with intent to get into programming. First class was in Java. Loved it. Second class was in COBOL.

About a week into COBOL, I realized that programming wasn’t as exciting and fun as I first imagined. I was fortunately able to nope on out of that semester early enough that I didn’t have to pay tuition.

Yeah, modularity is another wicked problem. You don’t know which business rules are going to be changed ahead of time. It’s easy to put certain business rules into a position of extreme flexibility (how many days till an account goes in arrears? just put the value in the db), but putting all the business rules into a state of extreme flexibility basically leaves you with a blank IDE window. There have to be some assumptions baked into the core of your software. Those will always be risky and difficult to change (though things like unit tests and automated regression tests can help a lot).
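The arrears example above can be made concrete. This is a minimal sketch (the names and the 30-day default are invented for illustration) showing the difference between a rule whose *value* is flexible and a rule whose *shape* is baked in:

```python
# One business rule externalized as data, per the "just put the value
# in the db" approach. GRACE_DAYS stands in for a value that would be
# read from a database or config file in a real system.
GRACE_DAYS = 30

def is_in_arrears(days_since_due, grace_days=GRACE_DAYS):
    # The threshold is trivially configurable, but the shape of the
    # rule -- "compare elapsed days against a single cutoff" -- is a
    # baked-in assumption at the core of the software.
    return days_since_due > grace_days

print(is_in_arrears(45))      # True with the default 30-day grace period
print(is_in_arrears(45, 60))  # False with a 60-day grace period
```

If the business later wants tiered penalties or per-customer grace periods negotiated in contracts, no configuration value saves you: the baked-in assumption breaks, and the code itself has to change. That is the trade-off between flexibility and the blank IDE window.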

Keeping programs small, focused and with well-defined interfaces helps an awful lot in my experience.

You may have already seen this:


No, thanks for sharing.

The example of the travel agent is a good one. They can find and build itineraries in their efficient green screen programs that I couldn’t get on websites or through the airlines. Last time I went to Europe, I used a travel agent to get a special routing.


That was an interesting article. Thanks for posting it.

One thing that strikes me about the kinds of businesses running COBOL still is that they exist in industries that have enormous barriers to entry, and in many cases, enormous governmental support. Both airlines and banks have had Noahic flood events in the last 20 years that could have wiped out billions of lines of active COBOL but didn’t due to the actions of the government.

Retail, on the other hand, has had much more active turnover. It’s strange to think of sunsetting COBOL when a company sinks beneath the waves, but I suspect that between Sears, Kmart, Montgomery Ward, and all the department store consolidation, lots of mainframes have headed to the knackers in the last 30 years. I suspect that Target and Walmart may run some COBOL, but I doubt Amazon has any.

Andreessen’s “Why Software Is Eating the World” is 10 years old now, and I think he’s been proven right. The retailer who acts the most like a software company seems to be winning, anyway.

Thank you for this comment; I agree wholeheartedly. Amazon, Google, Facebook, Microsoft, Netflix, and Apple probably run zero COBOL. It’s a mainframe thing, and mainframes are tied to IBM, which is a shadow of its former self. The article sounds like a “Nokia is still the number one seller of cellphones in the world” article written in 2008. The pride oozing from it makes those applications a ripe opportunity for a newcomer.

Yes, those COBOL mainframe programs are fast and efficient and can do millions of transactions per second, but the device you are reading this on can do that too, even in Java, and even with all the inefficiencies accumulated in the past 40 years. Plus, most modern systems can do it in real time, while most of these COBOL systems need nightly batch runs.

Google does batch runs too, but the flexibility and sheer scale of Google is way more impressive. They did learn from those old COBOL systems.


This video makes a bold claim that I agree with: software is getting worse, all of the advances in technology are due to improvements in hardware despite software getting worse, and the skills required to make good software are being permanently lost to history.

I think the future health of the tech industry will come down to getting disentangled from it. Get more analog technology, and then focus on having simple and reliable software that you control where you need it. We need software simple enough that we can compile it ourselves and understand the source code. Make it follow the Unix philosophy of doing one thing well, using scripts and interactions to do more complicated things. As long as we are depending on code that is too complicated for one person to understand, we have no hope of protection against tyrannical use of technology.
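The “do one thing well, then compose” idea can be sketched even outside a shell. The functions below are invented for illustration: each does one small job and is simple enough to read and verify in isolation, and the complicated behavior comes from composing them, in the spirit of a pipeline like `grep failed log.txt | wc -l`.

```python
# Three tiny single-purpose "tools", composed Unix-pipeline style.

def read_lines(text):
    # Split raw text into lines (the "cat" of this sketch).
    return text.splitlines()

def keep_matching(lines, needle):
    # Keep only lines containing the needle (the "grep").
    return [line for line in lines if needle in line]

def count(items):
    # Count the results (the "wc -l").
    return len(items)

log = "login ok\nlogin failed\nlogout ok\nlogin failed\n"

# Equivalent in spirit to: grep failed log.txt | wc -l
print(count(keep_matching(read_lines(log), "failed")))  # 2
```

Each piece is small enough for one person to fully understand, which is exactly the property the paragraph above argues we need.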


Thanks for sharing that video - I look forward to watching it. I suspect I’d agree with you about software getting worse.