Naïve Technosolutionism

Jack has a good piece from last month squinting at some of the common arguments for various uses of the blockchain.

The Crypto Absolutism Fallacy, or “CrAbs Fallacy”, applies when a proponent of a system suggests that some condition will be improved, without recognising that this only happens if the system exists in isolation and/or if every actor is only using that system; if actors can use that system alongside other similar systems, then the condition actually stays the same or worsens.

Along with this comes a fallacy of 'naïve trust', where a system is built or talked about in such a way that it only works on 'the assumption that all actors are acting in good faith'.

Many of the examples tapped into something else, which I realise has become an unconscious heuristic for me when evaluating technology stories.

There's a particular technosolutionist mindset that's emblematic of Silicon Valley (but which is neither solely of that place nor fully defines that place). It generally asserts that whatever piece of technology is being touted today (blockchain, self-driving cars, undermoderated social networking) is going to solve [insert major problem of your choice], and it is going to solve it on its own merits, which are inherent to the technology, to the extent that any critical consideration of how we implement and use that technology is secondary.

This is exemplified in Facebook's old motto 'move fast and break things'. The technology is going to change the world for the better and bring about a huge amount of net good; therefore, any 'bad outcomes' produced along the way are seen as justifiable and incidental. And besides -- once we get to the 'point of the good', we'll be able to optimise away all of the 'bad' we've produced, and it'll all balance out, okay??

(This usually comes down to capitalism and growth in some form, even if in some cases this rationale remains honest in the minds of the creators -- if you believe what you're doing will change the world positively, but you need the resources to get it there, you can be willing to tolerate a whole host of negative externalities to ensure that you get them. Especially if you're mug enough to believe you can just 'fix them all later'.)

What usually runs through this, implicitly or explicitly, is that the technology supposedly negates the need for all that finicky 'politics'. Which always makes me laugh (bitterly), ever since I started conceiving of politics as 'the thing we do as an ongoing negotiation to try to stop ourselves killing each other' (which is not necessarily a 1:1 map onto e.g. national politics). Which brings us back to Jack's 'naïve trust' and CrAbs fallacies -- the idea that you could do away with politics (even if you wanted to) tends to be laughably naïve.

In practice, though, this underpins a lot of major contemporary 'technology' corporations, where the innovation isn't truly 'technological' in an 'interface with our physical environment' LeGuinian sense, but in laundering away the 'politics'... in the form of bypassing, say, labour regulations. (See: Tesla, Uber, many others.)