I was going through my writing recently, which is very backlogged (to journal, I use Notion). There, I found a piece about human-centered technology👶 and the barriers I’ve faced trying to make it. (Human-centered technology prioritizes people, focusing on human experience, preferences, and benefits. Many definitions online are elusive and promote paid courses on human-centered design. My view is that it should save lives or benefit humanity, especially in the end product it delivers – for example, Climate Tech, Green Tech, Health Tech, etc.)
Since authoring it, I have grown past the thinking I once had – that I must make my software “change the world.” 🙄 I write about this more in a comparison post, where I conclude that well-made, practical solutions, e.g. human-friendly technology👩🌾, are cool, too. (Human-friendly technology is a phrase I coined to emphasize a human-first approach to making practical technology that does not require “save the world” outcomes. I think it is done through a long-term commitment to values and establishing accountability systems to uphold them, like “Building in Public.”)
Below, I will explain human-centered technology, my frustrations with it, and what we can expect going forward.
First, I redefined human-centered technology because I was dissatisfied with existing definitions.
Originally, I was looking for a way to describe technology that changes the world or benefits humanity in some obvious way. The closest thing I could find was human-centered technology, but the definitions online seemed to favor only a people-first approach to designing software. They did not focus enough on the outcomes, such as the end product or how building it affected the people who make and sell it.
Be warned: I insist that human-centered technology should be focused on outcomes. With this context, I will share my view of what’s difficult about making human-centered technology, as I wrote it in my journal:
Compounding Constraints: Passion, Time, Ethics
On the topic of building digital products from scratch, I wrote:
“I have acquired a lot of skills that make building a SaaS application seem logical. But frustratingly, I don’t want to build most SaaS apps.”
Why? I say that passion, time, and ethics are compounding constraints on building my next product.
Passion: Apps or digital platforms don’t tend to solve problems I care about — ones that improve human lives in the real world.
Time: Investing time and energy for 5-10 years and taking on business and financial risk is a waste if I don’t believe in it.
Ethics: There are ethical limits to what problems I can responsibly solve. These limits depend on skillset, interest alignment, subject-matter and domain expertise, cultural relevancy, etc.
I appreciate that my frustration was not just about some glorious pursuit; I recognized that there would be real financial risk and lengthy time commitments. Laughably, however, 5-10 years probably will not be enough to solve any major world issue.
Fortunately, I did capture another important barrier to making this kind of software: You need to be “the right person” or have “the right team” to take on certain goals. Humans have problems, sure, but being a domain expert or a person who has suffered from that problem makes you more qualified to take on more impactful projects. When your technology greatly impacts people, there’s more room for error (like leaving some groups out)…and more critics!
Previously, I would abandon product ideas when they weren’t human-centered enough in what they would produce. Ideas can seem cool but may be harmful!
I’ve always wanted to make a dating app. By the way, I am happily partnered, and I’ve never used a dating app. Now let me passively sell you my idea, which I have not built yet.
(Ask me for more details. It’s just sitting in my “idea machine” Notion database, waiting for you to ask about it. People – well, my friends – like my idea, I swear.)
Anyway, the issue is that it’s hard to come up with a business model whose incentives favor the people who use the app. Dating apps are costly to run when used by many people, meaning they need to make money to sustain themselves.
To limit how many people use it at once, my pitch is an app you must wait to use. I sell the waiting as a way to “prove you are time-rich,” because everyone is busy nowadays.
I am, apparently, still waiting… to build it. 🤣 That’s because, until I can “become the right people” to build this app, plus figure out how to fund it, I cannot guarantee that it will stay a cool, ethical place… which dating apps are known for being! 🤫
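Since my pitch is literally “an app you wait for,” here is a tiny, hypothetical sketch of what that gate could look like. Every name in it (WaitlistEntry, joinWaitlist, canEnter, the 30-day wait) is invented for illustration – I have not built any of this:

```typescript
// Hypothetical sketch of the "wait to use" gate: every new member gets an
// unlock time in the future, and the app refuses to let them in until it passes.

interface WaitlistEntry {
  userId: string;
  joinedAt: Date;
  unlocksAt: Date; // when the user has "proven they are time-rich"
}

const WAIT_PERIOD_DAYS = 30; // arbitrary; the point is that waiting is the price of entry

function joinWaitlist(userId: string, now: Date = new Date()): WaitlistEntry {
  const unlocksAt = new Date(now.getTime() + WAIT_PERIOD_DAYS * 24 * 60 * 60 * 1000);
  return { userId, joinedAt: now, unlocksAt };
}

function canEnter(entry: WaitlistEntry, now: Date = new Date()): boolean {
  // No payment check, no engagement streak -- patience is the only currency.
  return now >= entry.unlocksAt;
}

// Usage:
const entry = joinWaitlist("user-123");
console.log(canEnter(entry)); // false until the wait period has passed
```

The design choice is the whole point: the only thing the gate rewards is patience, not spending or engagement, which is exactly the incentive problem I cannot promise to preserve once funding enters the picture.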
Anyway, back to big, lofty goals that help humanity. I just don’t believe an app, especially a singular app, will solve many of today’s biggest problems.
In fact, I do not know of a single standalone software product that has changed lives for the better.
It is frustrating to perpetuate the idea that one software solution will be so impactful, e.g. world-changing or “game-changing.” (Fun fact: in the past year, there have been nearly 50 million news articles about AI being game-changing. This post was written in August 2024, so the year referenced is August 2023 to August 2024. The link will work in your present time – I wonder if the number will grow?) Who are we trying to convince?
We are aware of this delusional thinking when we hear billionaires refer to the “metaverse” as the future. Side note: I really like gather.town, which is technically metaverse tech.
Their goals seem out of touch because the technology proposed tends not to solve significant human problems. Or the solutions proposed are obscure and go against human psychology.
For example, Elon Musk predicted language would disappear in favor of telepathy technology. This does not seem to consider the human experience; it abstracts away from what’s nice about being human in favor of… I don’t know what is favorable about having other people in my head!? And I like hearing my loved ones’ voices out loud!
If you can think of a singular software product that did, let me know. If you say “Google Search,” “iPhone” (reminder: this is a physical product, too), or “ChatGPT,” you might consider reading Ben Thompson’s blog post, The Great Flattening. He writes about the impact that many big companies and their products have had on our once-decentralized internet:
“The result is that consumers have access to anything, which is to say that nothing is special; everything has been flattened.”
It’s worth reading as he highlights each major company’s role in this flattening. The Hacker News comments section goes off on it, too.
The future of human-centered technology will be full of smaller, niche projects that collectively address the world’s problems.
So, if not singular world-changing apps, then what? Probably, we are going to see a lot of different software solutions solve specific problems really well. It will be through their collective effort that we change the world together.
This past week, I found a cool project, Seafoundry, that uses AI to restore coral reefs. Google Translate is another classic example of a niche, useful application that contributes to the collective good. Singular, world-changing apps are probably a thing of the past, or better yet, a complete myth. However, we will probably solve problems more efficiently when we narrow in rather than cast wide nets.
Not all technology is going to be human-centered, and that’s okay. Shoot for human-friendly.
These days, I can “settle” for human-friendly technology. This approach is about striving for human-centeredness at the macro level (where possible), living by values institutionalized in accountability systems, and ensuring that humans are the focus of the product’s features. In the end, hopefully, you built a really great analytics tool or calendar app. It’s okay to make practical things, better than anyone else is making them.
Remember, you don’t have to build it. You can say no to the bad idea!
In the past, I’ve opted not to build certain applications because I did not “want to be in the business” of “x” or “y” things. For example, I previously mentioned the dating app idea. I’ve said no so far!
Anyway, this was especially the case when I’d find some really nasty way it might get used. For example, I thought of a chat application that allows people to chat if they are nearby. This seemed interesting for people who live in the same building, meeting up for entertainment, etc., but I ruled this out because it might enable illegal marketplaces, harassment, and other misuses that would be hard to prevent. I am still glad I did not make this, but partly for other reasons – to be saved for another time!
What ideas have you said no to? Explore more prompts or writewithme.