Like most people who will read this, I spend a lot of time with startups. Something startups tend to agonize over is what to do vs. what not to do, and they further analyze those decisions by considering what will scale versus what won’t.
Multiple times a week I hear “that won’t scale, we shouldn’t do/be doing/try it that way.” That’s true, most of the time. It also assumes, though, that whatever first version of the idea you’re working on is going to succeed and need to scale.
Not everything is successful enough to scale. Most technologists don’t want to admit that when deciding to build new technology; they assume it will be used millions of times and will always be successful.
A lot of ideas die at ideation because of this and I think that’s a mistake.
I honestly think that’s the crux of the difference between doing something you know won’t scale and not doing it at all. Whether you could automate a core process if the idea works is at least worth considering.
As a starting point, I always like to consider the path where a process scales linearly rather than exponentially. Mapping out linear scaling usually means adding actual people to support the process, and I think that’s ok.
I’ve often found that automation built too soon ends up being painful to fix. Automation built too late shows up as growing teams of people doing the same task, where the financial model starts assuming certain work can’t be automated because it’s fixed to growth. Those assumptions become recognizable when they turn dogmatic, whether they’re tied to number of customers, revenue, or some other metric that is important to your company.
Where dogmatic assumptions are developing, there are also opportunities for automation: places where adding new technical functionality can give the people driving the work exponential output.
- Linear growth means running a process x times, without messing it up, using x resources.
- Exponential growth means running a process 100x faster, without making any mistakes, using almost no resources.
The common misconception is that one replaces the other, but they tend to live in harmony. Say a specific task that involves clicking buttons takes up 25% of a team’s time. If you can do that task with a computer that never gets tired, 100x or even 1000x faster, and give the team that 25% of its time back, you’ve got something really special.
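To make the arithmetic above concrete, here’s a toy back-of-the-envelope sketch. All the numbers (team size, weekly hours) are made up for illustration; the 25% task share and 100x speedup come from the example above.

```python
# Toy model with made-up numbers: a manual task consumes some fraction
# of a team's total weekly hours, and automation runs it N times faster.

def hours_reclaimed(team_size, weekly_hours, task_share, speedup):
    """Weekly hours freed by automating a manual task.

    task_share: fraction of team time the task consumes (e.g. 0.25)
    speedup: how many times faster the automated version runs
    """
    manual = team_size * weekly_hours * task_share
    automated = manual / speedup
    return manual - automated

# 5 people x 40 hours x 25% = 50 manual hours/week; at 100x speed,
# only 0.5 hours of that work remains.
print(hours_reclaimed(5, 40, 0.25, 100))  # → 49.5
```

The point of the sketch is the shape of the result, not the numbers: even a modest task share, automated well, hands nearly all of that time back to the people doing the work.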
That’s where developing processes that scale really matters. You don’t know what part of the process does or doesn’t scale linearly vs. exponentially until customers actually use the product.
I’m personally a big fan of developing processes where people can standardize the methods used, which can later be automated before it becomes a cost center that can’t be controlled. If it’s a cost center that scales linearly for many months (or sometimes even years), that’s ok as long as you’re humble enough to consider exponential process improvement along the way. That means treating your first process and system design as a starting point rather than something never to be touched again.
These processes and the evolution to automation can take years, and can even appear as though they never happen if you’re close to them.
Being really good at automation can change how your idea scales, but if you try to automate too soon you might slow some really fantastic ideas down. Anytime I hear the “that won’t scale” argument, I want to unpack it.
The world needs more ideas put into action regardless of how they are assumed to scale.