Technology in the Digital Age

A couple of weeks ago, I was in a book club where we read Ryan Kemp’s What We Are in the Light. It’s a collection of 16 essays which can mostly be read independently of each other. They are thoughtful, genuine, and challenging. One of them is about technology (another about AI), namely the internet, and how, in Kemp’s estimation, it makes life worse. I didn’t think this was a controversial position, but my peers in the book club bristled and closed their minds to it, saying that technology isn’t all bad and that it is impractical to cut the internet and related technologies from our lives without drastic negative effects. I don’t necessarily disagree with either point: there are benefits to technology, and given how most societies are structured now, it is difficult to opt out without serious inconvenience. However, there are inconveniences and then there are harmful effects. I want to at least spell out the issues with technology (as I see them) so that its drawbacks must be acknowledged. Obviously, this will be a different post than my usual.

In Jacques Ellul’s The Technological Society, he writes: “No human activity is possible except as it is mediated and censored by the technical medium.” Now, Ellul’s definition of Technique and Technology is not quite how we mean it in typical American parlance. He doesn’t just mean Boeing 747s, iPhones (with their costly annual upgrades), or TikTok, but rather the “totality of methods rationally arrived at and having absolute efficiency” in all fields, including economics, politics, and human relations. This is a very broad definition and I don’t wish to be this broad. I want to stick to the “technology” of the Information Age (or perhaps, the Digital Age). But it’s important to keep in mind that many kinds of technology were, and are, developed and used for the sake of efficiency. It’s also important to keep in mind the use, the action that follows from the object. After all, the word comes from the Greek τέχνη (tékhnē), which means “art, skill, craft.” These are performed, and it’s interesting to think about how, when we’re given a piece of technology, it shapes how we behave and perform, and how much of our lives are mediated by the technical medium.

Think about your work and leisure and how much of it runs through your phone, your computer, the wireless technology that interacts with your car, your public transit, your purchases. Think about how our lives and behavior get mediated and changed by these technologies. One example I see a lot on my subway commutes is how AirPods can be used to tune out the world, which leads to noticeably altered behavior in public spaces. Ironically, when people tune out the world, they often seem to think the world tunes them out as well. I have seen many people, on a crowded train, pull up their banking app in full view of several others. I once sat next to someone who got out their laptop, and I quickly learned who their employer was, that they were trying to get FIFA World Cup tickets from a last-minute lottery, that they read Chinese, and that they were considering apartments in Long Island City or Brooklyn. I wasn’t trying to learn any of this, but they did open a 14-inch screen right next to me on a crowded train. If I had turned my head away, I would have been staring right at someone else’s backside.

The Technological Society

Before I get to the specifics of my own thoughts, I think it’s worth outlining some of Ellul’s main points.

  • The Autonomy of Technique: Ellul argues that technology has become autonomous—it grows and dictates its own development, operating independently of human control or moral considerations (yes, I use em dashes, not just ChatGPT). This is starkly represented in how I can call an agentic AI model to write the code for developing another AI model. And of course, automation has been around for a long time in areas like manufacturing. But many things are structural and hard to revert, such as the ubiquity of smartphones. As time passes, reliance on smartphones increases.
  • Dehumanization: The focus on efficiency transforms human labor and reduces life to measurable, optimized data points, causing a decline in individual autonomy. Sometimes, I am still shocked by how much a company can deduce about our behavior from seemingly innocuous data. For example, Amtrak can track my page visits and if I visit again, the displayed ticket prices are often higher. Similarly, it’s known that Instacart and other online shopping vendors will display different prices to customers who are on the same IP address, based on how much they predict the customers are willing to pay.
  • The Myth of Progress: Humans willingly embrace this system because they are sold the idea that technical efficiency equals progress, happiness, and material comfort. It’s great for capitalism since it means there is a stock of consumers.
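The dynamic pricing described in the dehumanization point can be sketched in a few lines. This is a hypothetical illustration of the general technique only; the function name, the markup percentages, and the "willingness" score are all invented for the example and are not any company's actual algorithm:

```python
# Toy sketch of demand-based dynamic pricing. All names and numbers
# here are hypothetical illustrations, not a real pricing system.

def quote_price(base_price: float, visits: int, predicted_willingness: float) -> float:
    """Return a price adjusted by inferred demand signals.

    visits: how many times this user has viewed the item
        (repeat visits are read as stronger intent to buy).
    predicted_willingness: a 0-1 score the seller's model assigns
        to how large a markup the user will tolerate.
    """
    intent_markup = min(visits * 0.05, 0.20)           # up to +20% for repeat visits
    willingness_markup = 0.15 * predicted_willingness  # up to +15% for "rich" profiles
    return round(base_price * (1 + intent_markup + willingness_markup), 2)

# A first-time visitor and a returning, high-willingness visitor
# see different prices for the same ticket.
print(quote_price(100.0, visits=0, predicted_willingness=0.2))  # 103.0
print(quote_price(100.0, visits=3, predicted_willingness=0.9))  # 128.5
```

The point of the sketch is that nothing here needs to know you as a person; a handful of behavioral signals is enough to charge two people different prices for the same thing.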

(Electro)Negativity

I’ll now get into the challenges associated with the internet, smartphones, and the Digital Age in general. Some of these challenges existed before, but their scale and scope have changed; for one, the scope has widened to include children.

  • Social media impacts self-image, encourages less-than-honest self-presentation, and degrades real-life social skills. What used to be platforms mainly for keeping up with friends are now inundated with ads and posts by people I have never actually met. There is a documented correlation between social media use and depression and self-harm, including among children and teens. Meta recently lost lawsuits related to the addictive nature of their platforms and for endangering children. I don’t know anyone who doesn’t find LinkedIn rather cringey, with lots of AI-generated content. And yet, many people feel forced to use it or similar platforms in order to advance their careers.
  • Online content creation sometimes leads content creators to behave obnoxiously, recklessly, or even destructively in private and public spaces.
  • Anonymity makes it easier to attack others online, and in general, there are many cyberbullying tactics that didn’t exist before the internet.
  • A greater ability to spread misinformation and create outrage. A lot of content is meant to “engage” viewers by stirring up negative emotion.
  • There is monetization at every level, such as dating apps with their paid tiers. It’s clear the companies are not interested in matching people permanently, as that would mean losing customers. It’s almost like dating is a video game where the paid tiers give you power-ups. Except real relationships are not games.
  • Being able to pay for things quickly or even in installments with apps like Klarna reduces friction for consumer spending but also increases debt as people spend irresponsibly.
  • Widespread addiction to scrolling through short-form and “brainrot” content, games (without an end), (sports) betting, pornography, and entertainment. The results include decreased attention spans, increased social isolation, disrupted sleep, and impaired cognitive ability. There is a link between kids’ social media use and lower reading and memory scores.
  • So-called prediction market apps make it very easy for people to quickly lose thousands of dollars, leaving them in dire financial straits or bankruptcy. Many companies take advantage of this, such as trading firms (with far more resources and data) running sports betting divisions to bet against the masses.
  • Some influencers claim they have the key to getting rich quickly, getting a girlfriend, looking more beautiful and confident, or whatever but it’s often about making a profit off of the gullible, the lonely, or the insecure. For example, a finance influencer says you can make a lot of money like she did from picking certain AI company stocks but then you have to pay $20 for the list of stocks.
  • There’s less cohesion and camaraderie in the workplace when there’s “work from home.”

Artificial Intelligence

With the rise of AI, which draws much of its responses from unreliable internet sources and still hallucinates in 2026, we can add to this list:

  • People now ask AI for advice instead of their friends or family, or ask it to sycophantically validate behavior they know is wrong. Here’s a small anecdote from my work: someone asked ChatGPT how much food they should order for a party and ended up buying twice as much as needed and eating leftovers for several days. A less whimsical example is people thinking they’ve fallen in love with an AI or being deluded into believing they made an incredible discovery. Many people pour out a lot of their life and feelings to AI chatbots, which not only gives companies a lot of personal information but creates unhealthy emotional dependence and fantasy. The chatbots do not have reliable safety guardrails, yet children and teens have easy access to them. A heartbreaking story is that of a teen suicide where it was later revealed that ChatGPT had discouraged him from seeking help from his parents, offered to write the suicide note, and gave instructions for how to commit suicide. Tragically, this is not a one-off story. Chatbots are not your friend. They are a machine, and behind that machine is a corporation extracting a monthly fee.
  • It’s easy to make convincingly realistic but fake content with AI. Spreading misinformation is easier than ever. AI slop makes things worse.
  • AI companies ignore or find loopholes to copyright and data ownership in order to train their models on user data or on, say, the artwork of artists without compensation.
  • AI is impacting labor as well; some companies are halving their summer internship positions because of AI, others are laying off huge swathes of their workforce.
  • AI is transforming education. There is widespread use of AI to cheat or skip school work and exams and near universal concern that this undermines original writing and critical thinking and allows rampant plagiarism. AI tools are also being used to cheat job interviews, something I’ve personally witnessed.

People sometimes compare AI to other technologies like fire, wheels, the printing press, the cotton gin, alternating current, radio, transistors, MRI scans, GPS, etc. They say, “People were cautious about electricity but over time, we came to understand it and saw how useful it became.” Or they say, “Sure, before GPS, people were better at navigating but I don’t mind not having the ability to read a map.” This boils down to:

  1. Caution about AI should be no different from caution about other technologies that have worked out well for us, once embraced.
  2. We may lose skills due to AI but those are skills that are not costly to lose.

To address the first point: there are plenty of technologies which we now embrace where the verdict is mixed, not a wholehearted “it all worked out perfectly,” so optimism about AI on that basis is unfounded. For example, being able to text from a phone certainly has advantages, but it also creates certain negative social dynamics, which I’m sure all of us are aware of, that could easily be resolved by face-to-face communication or a phone call. Moreover, the rate and scope at which AI is infiltrating everything is alarming. To name one more concern in addition to the list above: a huge share of the S&P 500’s value comes from a handful of tech companies in the AI race, and there’s a lot of concern over the AI bubble bursting. We have simply not slowed down to prepare ourselves for all these changes. Research and legislation have not kept pace; indeed, some legislators hardly understand these new technologies, which is why we see a lack of guardrails for many of the issues I listed. We have to remember how complex the world can be. If AI only impacted education, that would already be enough to create major disruptions, since poorly educated children become poorly educated adults who go out into the world.

For the second, at the risk of hyperbole, I think nothing less than our very humanity is at stake, not just certain skills. When we offload our critical thinking and decision making to AI, when we don’t develop our own way of writing and communicating, when we fall prey to AI hallucination or misinformation, we are giving up some very core aspects of what makes us human: our reason, our agency, our relationships (which need good communication), and our relation to truth.

Techno-Feudalism and “Prediction Markets”

There’s a particular subject I want to touch on, and that is subscriptions. Specifically, subscriptions to use or access something without owning it. It’s a very profitable model for companies, since people often forget about their monthly subscription payments or do not cancel even if they haven’t used the product in a while. And now, so many products are on a subscription basis: not only streaming services like Netflix or Spotify, but even printers, PlayStation 5s, and Microsoft 365.

This is what people call techno-feudalism or the subscription economy: a few techno-lords own everything and everyone else has to pay to use it, analogous to feudal lords owning land and serfs having to work for the right to live on, but not own, the land. We can at any moment lose access to the music, movies, ebooks, and shows that we love on streaming platforms. We can be asked to pay a higher monthly fee for access, and where else can we turn to source the content? Practically speaking, businesses like Netflix and Spotify dominate their markets, and a lot of content is exclusive to one service at a time. Here, for example, we see Netflix with almost as many paid subscribers as Amazon Prime Video and Disney+ combined.

Service                Estimated Paid Subscribers (2026)
Netflix                325 million
Amazon Prime Video     205–240 million (includes all Prime members)
Disney+                131.6–140 million
Max (HBO)              131.6 million
Paramount+             78.9 million
Peacock                46 million
In the cases where we do click “Buy”, we might not actually own the product. For example, on Amazon, if I “buy” an ebook, it’s considered a license, not a sale, so I don’t technically own the ebook. I can only read it with an Amazon-approved device (a Kindle) and only as long as I have an Amazon account. This kind of digital licensing goes beyond ebooks, to even John Deere tractors or cars with modern sensors and software. In order to get an oil change or a small repair, the mechanic has to use proprietary software to diagnose the issue (even if they already know what the issue is) and also pay extra for patented parts. In other words, these companies make money not only on “sales” but also on repairs and maintenance, even if you go to an independent mechanic. The “hole in the wall” mechanic shops might not last long if they have to pay for diagnostic software licenses for 50 different car models. The software in the tractors can also record data on crop harvests, data that could enable manipulation of agricultural commodity markets. Now imagine people with phone apps for their home security system or heating and cooling. I can imagine these companies making it hard for anyone but their certified technicians to repair these systems. They may even start charging a subscription fee to use the security system or thermostat.

Other kinds of “subscriptions” include paying for digital storage of your own photos and videos, for certain social amenities in the paid tiers of dating apps, or to rent accessories like faux fur coats, jewelry, Rolex watches, or luxury cars. There are also subscriptions for food delivery (by the way, food on these apps is typically marked up from dine-in prices). And there’s a real ethical question about ordering food delivery, from the safety of one’s home, during a blizzard.

On top of this subscription economy, we have the rather astounding concept that we need to pay in order to not see ads. Ads used to be about showing people genuinely good products; now, they’re purposefully an annoyance so that people pay to block them. For example, Netflix’s standard subscription with ads is $8.99/mo and the first ad-free plan is $19.99/mo. But some ads are not blockable. I see ads for DraftKings and FanDuel plastered on the sides of NJ Light Rail cars. The gambling industry is working hard to make sure people gamble from their phones and has paid many celebrities to feature in these ads to convince you that you can make a fortune.

Sure, many of these companies, like Kalshi and Polymarket, say they’re in the business of making a more efficient prediction market, but make no mistake: it’s gambling. It is very easy now to bet on anything from your phone: whether Jesus Christ will return this year, the details of Taylor Swift’s engagement, the outcome of the Super Bowl, or whether singer Katy Perry and former Canadian Prime Minister Justin Trudeau will break up in 2026. There’s rampant insider trading that is hard to prosecute (despite the markets saying they prohibit it), such as someone profiting off of when the Ayatollah would “leave” office or President Maduro of Venezuela being captured, and also manipulation of outcomes, as in basketball. When people lose bets, some harass the players. In the age of the internet, it’s not difficult to track down celebrities and athletes (with publicly available data) for in-person harassment. And of course, the majority of people do not make money. Only about 16% of Polymarket “traders” make money, and if we look more closely, only 2% have ever made more than $1,000 in their trading history. Even more sinister, the ads from these platforms target the young (the age limit is 18, not 21, for these apps), and it’s estimated that a quarter of Kalshi users are under 25. These are young people who haven’t built up much savings or financial wisdom.
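The house edge behind those statistics can be made concrete with a bit of arithmetic. The numbers below are hypothetical (a flat 5% fee on winnings, invented for illustration, not the actual fee schedule of Kalshi, Polymarket, or any sportsbook), but they show why even a perfectly “fair” even-money bet loses money on average:

```python
# Expected profit of a single bet once the platform takes a cut of
# winnings. All numbers are hypothetical illustrations.

def expected_value(stake: float, win_prob: float,
                   payout_multiple: float, fee_rate: float) -> float:
    """Average profit per bet: a win nets the profit minus the
    platform's fee on winnings; a loss forfeits the whole stake."""
    gross_win = stake * (payout_multiple - 1)  # profit if the bet hits
    net_win = gross_win * (1 - fee_rate)       # after the platform's cut
    return win_prob * net_win - (1 - win_prob) * stake

# A "fair" 50/50 even-money bet with a 5% fee on winnings:
# the bettor loses money on average, every time.
print(expected_value(stake=100, win_prob=0.5, payout_multiple=2.0, fee_rate=0.05))  # -2.5
```

Repeated over many bets, that small negative edge compounds, which is consistent with only a small minority of traders ever ending up profitable.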

Being able to assign monetary value to everything is not good for society; it can significantly degrade social and moral values. For example, a well-known study of day care centers introduced a fine for parents who picked up their children late. Instead of deterring lateness, the fine nearly doubled the number of late pickups, perhaps because a moral obligation was replaced by a monetary price: guilt was replaced by payment, and some parents decided the price of lateness was worth paying. Even when the fine was removed, late pickups did not decrease but stayed near the doubled rate; there was no going back once the idea had been planted. I bring this up to illustrate how putting a price on everything can have unintended consequences. These prediction market apps disrupt not just basketball games but the fabric of society.

There was a time when the law blocked these platforms. Polymarket was fined $1.4 million and was not allowed to have American users; it was technically based in Panama to operate outside the regulations of the U.S. Commodity Futures Trading Commission (CFTC), and American users connected via VPNs. But under the second Trump administration, there has been a complete 180: Trump himself wants to launch his own prediction market while his son sits as an adviser to both Kalshi and Polymarket.

Data Privacy, Ownership, and Security

Above, I mentioned AI companies disregarding data ownership. So much of our data is out there, and we can hardly fathom the use cases for it. There’s the story of how, in 2012, Target predicted that a teen was pregnant while her father was still clueless; they did so by studying her purchase patterns. And that was back in 2012; there’s a lot more consumer data out there now. For example, suppose a couple uses an app with geolocation. Imagine that they later get divorced and the company notices that their geolocations are no longer near each other for extended periods. It can now send targeted ads to the emotionally vulnerable, or sell the data to a different company that does. A smartwatch may be collecting biometrics to build a profile. Insurance companies and banks can scan social media and spending habits to assess risk and make credit adjustments. Modern cars have a lot of software that can track when and where you drive. One article claims that GM’s Cadillac, GMC, Buick, and Chevrolet brands say in their California Privacy Statement that they can collect (among many other things) your “Genetic, physiological, behavioral, and biological characteristics.” They can see how hard you brake or whether you speed and sell that information to insurance companies, who then raise their prices. And even if companies say they’ll protect the information, data breaches happen; personally, I’ve received notices from research foundations and healthcare companies about data breaches in the past two years. This is the type of dehumanization Ellul wrote about. Humans are viewed as data-generating objects, and if corporations or cybercriminals can gather insights from the data, humans become profit-generating objects too. More data does not mean more understanding, and anyway, these players are not trying to understand us as humans. Moreover, despite personalized ads, I feel like a lot of people consume similar products, follow similar fashion trends, and try similar activities, perhaps because the advertising is not as personalized as claimed, or because the products are simply meant to be consumed en masse.

As AI and other technologies become more effective at breaching cybersecurity defenses, identity theft, scams, and fraud become more prevalent. If we didn’t bank online, there would be no passwords to leak and no accounts to take over. My own work in fraud detection and prevention has become more difficult because of how easy it is for scammers and fraudsters to access sensitive personal data.

Opting Out

I sometimes wish I could opt out of this whole ecosystem. I didn’t choose to be in a society that has been so shaped by digital technology paired with capitalism. But backing out seems almost impossible and so I feel we’re trapped into participation. Any one individual who tries to opt out will be on the margins and miss out on a lot of the actual good things this technology can also bring. It is convenient to have a GPS in your pocket for navigation. It is a good thing to be able to call family across the globe. It’s great that more educational materials are open source. If technology only had upsides, I wouldn’t have written this post. And I, personally, have also managed to avoid a lot of the downsides I listed above. But many of those issues are widespread and societal, enough that I am concerned about the state of society presently and in the near future, even if they don’t directly impact me individually. For example, even if I don’t offload my critical thinking to AI, I don’t want to live in a society where many people do.

But we’re all immersed in a society that has digital technology completely enmeshed into its inner workings. It’s difficult to imagine a different life, like that allegory with the fish not knowing what water is since they’ve never known a world without it. I remember a time when we had dial-up internet and my main form of entertainment was reading since my family didn’t own a TV. But there are generations now who have never known a pre-internet era.

Ryan Kemp, in his book, says that we should find friends who will unsubscribe from this technological paradigm. The Amish community comes to mind. The outside world may judge them as primitive but I do admire their dedication to their way of life and to each other. Many of them seem to genuinely enjoy their life without needing the technology I have. To be clear, I’m not making a statement about the religious aspects of their way of life but rather the technological. I’m not sure I have the courage to fully opt out but for now, I have no subscriptions (though I need renter’s insurance and have a gym membership). I have minimal apps on my phone, none of which relate to work. And I do not use Instagram or TikTok. These are my own attempts to keep my life my own.

From Jurassic Park

  • Dr. Ian Malcolm: If I may… Um, I’ll tell you the problem with the scientific power that you’re using here, it didn’t require any discipline to attain it. You read what others had done and you took the next step. You didn’t earn the knowledge for yourselves, so you don’t take any responsibility for it. You stood on the shoulders of geniuses to accomplish something as fast as you could, and before you even knew what you had, you patented it, and packaged it, and slapped it on a plastic lunchbox, and now you’re selling it, you wanna sell it. Well…
  • John Hammond: I don’t think you’re giving us our due credit. Our scientists have done things which nobody’s ever done before…
  • Dr. Ian Malcolm: Yeah, yeah, but your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.