Objection

@kranglefant / kranglefant.tumblr.com

Ideas and thoughts about software development.

We don’t need a healthcare platform

This text was triggered by discussions on Twitter in the wake of a Norwegian blog post I published about health platforms.  

In it I stated that we need neither Epic, nor OpenEHR, nor any other platform to meet the needs of the Norwegian public healthcare sector. The Epic people have not responded, but the OpenEHR crowd have been actively contesting my claims ever since. Many of them don't read Norwegian, so I'm writing this for them, and for any other English speakers who might be interested. I will try to explain what I believe would be a better approach to solving our needs within the Norwegian public healthcare system.

The Norwegian health department is planning several gigantic software platform projects to solve our health IT problems. And while I know very little about the healthcare domain, I do know a bit about software. I know, for instance, that the larger the IT project, the greater the chance that it will fail. This is not an opinion; it has been demonstrated repeatedly. To increase the chances of success, one should break projects into smaller bits, each defined by specific user needs. Then one solves one problem at a time.

I have been told that this cannot be done with healthcare, because it is so interconnected and complex. I’d say it’s the other way around. It’s precisely because it is so complex, that we need to break it into smaller pieces. That’s how one solves complex problems.  

Health IT professionals keep telling me that my knowledge of software from other domains is simply not transferable, as health IT is not like other forms of IT. Well, let's just pretend it is comparable to other forms of IT for a bit. Let's pretend we could use the best tools and lessons learned from the rest of the software world within healthcare. How could that work?

The fundamental problem with healthcare seems to be that we want all our data to be accessible to us in a format that matches whatever "health context" we happen to be in at any point in time. I want my blood test results to be available to me personally, and to any doctor, nurse or specialist who needs them. Yet what the clinicians need to see from my test results and what I personally get to see will most likely differ a lot. We have different contexts, yet the views will need to be based on much of the same data. How does one store common data in a way that enables its use in multiple specific contexts like this? The fact that so many applications will need access to the same data points is possibly the largest driver towards this idea that we need A PLATFORM where all this data is hosted together.

In OpenEHR there is a separation between a semantic modelling layer and a content-agnostic persistence layer. All data points can be stored in the same databases - in the same tables or collections within those databases, even. The user can then query these databases and get any kind of data structure out, based on the OpenEHR archetype definitions in the layer on top. So they provide one platform with all health data stored together in one place, yet the user can access the data in the format they need, given their context. I can see the appeal of this. This solves the problem.

However, there are many reasons not to want a common platform. I have mentioned one already - size itself is problematic. A platform encompassing "healthcare" will be enormous. Healthcare contains everything from nurses in the dementia ward, to cancer patients, to women giving birth, orthopaedic surgeons, and families of children with special needs... the list goes on endlessly. If we succeed in building a platform encompassing all of this, and the platform needs an update - can we go ahead with the update? We'd need to re-test the entire portfolio before daring to make any changes. And what happens if there is a problem with the platform (maybe after an upgrade)? Then everything goes down. The more things are connected, the riskier it is to make changes. And in an ever-changing world, both within healthcare and IT, we need to be able to make changes safely. There can be no improvement without change. Large platforms quickly become outdated and hated.

In the OpenEHR case, the fact that the persistence layer has no semantic structure will necessarily add complexity to optimising context-specific queries. Looking through the database for debugging purposes will be very challenging, as everything is stored in generic constructs like "data" and "event". Writing queries is so complex that the recommendation is not to do it by hand, but to create the queries with a dedicated query-builder UI. Here is an example of a query for blood pressure:

let $systolic_bp="data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude"
let $diastolic_bp="data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude"

SELECT
  obs/$systolic_bp, obs/$diastolic_bp
FROM
  EHR [ehr_id/value=$ehrUid]
    CONTAINS COMPOSITION [openEHR-EHR-COMPOSITION.encounter.v1]
      CONTAINS OBSERVATION obs [openEHR-EHR-OBSERVATION.blood_pressure.v1]
WHERE
  obs/$systolic_bp >= 140 OR obs/$diastolic_bp >= 90

This is, needless to say, a very big turn-off for any experienced programmer.

The good news, though, is that I don't think we need a platform at all. We don't need to store everything together. We don't need services to provide our data in all sorts of context-dependent formats. We can split health up into smaller bits and simultaneously have access to every data point in any kind of contextual structure we want. We can have it all. Without the platform.

Let me explain my thoughts.

Health data has the advantage of naturally lending itself to being represented as immutable data.  A blood test will be taken at a particular point in time. Its details will never change after that. Same with the test results. They do not change. One might take a new blood test of the same type, but this is another event entirely with its own attributes attached. Immutable data can be shared easily and safely between applications. 
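To make that concrete, here is a sketch of what a blood test result could look like as an immutable value. All the field names here are made up for illustration:

import java.time.Instant;

public final class BloodTestResult {
    private final String patientId;
    private final String testType;  // e.g. "hemoglobin"
    private final double value;
    private final String unit;      // e.g. "g/dL"
    private final Instant takenAt;  // the point in time the sample was taken

    public BloodTestResult(String patientId, String testType, double value, String unit, Instant takenAt) {
        this.patientId = patientId;
        this.testType = testType;
        this.value = value;
        this.unit = unit;
        this.takenAt = takenAt;
    }

    public String patientId() { return patientId; }
    public String testType()  { return testType; }
    public double value()     { return value; }
    public String unit()      { return unit; }
    public Instant takenAt()  { return takenAt; }

    // no setters - a result, once produced, never changes
}

No setters, no updates. Objects like this can be handed to any number of applications without anyone being able to pull the rug out from under anyone else.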

Let's say we start with blood tests. What if we created a public registry for blood test results? Whenever someone takes a blood test, the results are sent to this registry. From there, any application with access can query for the results, or subscribe to test results of a given type. Some might subscribe to data for a given patient, others to tests of a certain type. Any app that is interested in blood test results can receive a continuous stream of relevant events. Upon receipt of an event, it can apply any context-specific rules to it and store it in whatever format is relevant for that application. Every app can have its own context-specific data store.
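To sketch what the consuming side could look like - the registry API, store and view types here are all hypothetical, it's the shape of the thing that matters:

public class DementiaWardApp {
    private final BloodTestRegistry registry; // hypothetical client for the public registry
    private final WardStore store;            // this application's own, small data store

    public DementiaWardApp(BloodTestRegistry registry, WardStore store) {
        this.registry = registry;
        this.store = store;
    }

    public void start() {
        // subscribe only to the results this context cares about...
        registry.subscribe("hemoglobin", result ->
            // ...and store them in whatever shape suits this application
            store.save(toWardView(result)));
    }

    private WardBloodTestView toWardView(BloodTestResult result) {
        return new WardBloodTestView(result.patientId(), result.value(), result.takenAt());
    }
}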

Repeat for all other types of health data.  

The beauty of an approach like this is that it enables endless functionality and can solve enormously complex problems, without anyone ever needing to deal with the "total complexity".

The blood test registry will still have plenty of complexity in it. There are many types of blood tests and many attributes that need to be handled properly, but it is still a relatively well defined, concrete solution. It has only one responsibility, namely to be the "owner" of blood test results and provide that data safely to interested parties.

Each application, in turn, only needs to concern itself with its own context. It can subscribe to data from any registry it needs access to, and then store it in exactly the format that makes it maximally effective for whatever use case it is there to support. The data model used by nurses in the dementia ward does not need to be linked in any way to the data model used by brain surgeons. The data store for each application will only contain the data that application needs, which in itself improves performance, as the stores themselves are much smaller. In addition, each store will be much easier to work with, debug and performance-tune, since it is completely dedicated to a specific purpose.

Someone asked me how I would solve an application for

"Cancer histopathology reporting, where every single cancer needs its own information model? and where imaging needs a different information model for each cancer and for each kind of image (CT, MRI, X-ray) + where genomics is going to explode that further"

Well, I have no idea what kind of data is involved here. I know very little about cancer treatment. But from the description given, I would say one would create information models for each cancer, for each type of image, and so on. The application would get whatever cancer data it needs from the appropriate registries, then transform that data into the appropriate structures for this context and visualise it in a way that is useful to the clinician.

We don't need to optimize for storage space anymore. Storage is plentiful and cheap, so the fact that the same information is stored in many places in different formats is not a problem. As long as our applications can safely know that "we have all the necessary data available to us at this point in time", we're fine. Having common registries for the various types of data solves this. But these registries don't need to be connected to each other. They can be developed and maintained separately.

Healthcare is an enormous field, with enormous complexity. But this does not mean we need enormous and complex solutions. Quite the contrary. We can create complex solutions, without ever having to deal with the total complexity. 

The most important thing to optimise for when building software is the user experience. The reason we're making the software is to help people do their job, or help them achieve some goal. Software is never an end in itself. In healthcare, there are no jobs that involve dealing with the entirety of "healthcare". So we don't need to create systems or platforms that do, either. Nobody needs them.

Another problem in healthcare is that people have gotten used to the idea that software development takes forever. If you need an update to your application, you'll have to wait years, maybe decades, to see it implemented. Platforms like OpenEHR deal with this by letting the users configure the platform continually. As the semantic information is decoupled from the underlying code and storage, users can reconfigure the platform without needing to get developers involved. While I can see the appeal of this too, I think it's better to solve the underlying problem. Software should not take years to update. With DevOps becoming more and more mainstream, I see no reason we can't use this approach for health software as well. We need dedicated cross-functional teams of developers, UXers, designers and clinicians working together on solutions for specific user groups. They need to write well tested (automatically tested) code that can be pushed to production continuously, with changes and updates based on real feedback from the users on what they need to get their jobs done. This is possible. It is becoming more and more mainstream, and we now have more and more hard data showing that this approach not only gives better user experiences, but also reduces bugs and increases productivity.

The Norwegian public sector is planning to spend more than 11 billion NOK on new health platform software over the next decade.

We can save billions AND increase our chances of success dramatically by changing our focus - away from platforms, and onto concrete user needs and simply making our health data accessible in safe ways. We can do this one step at a time. We don't need a platform.


Just coding it like it is

Our coding environments are going soft! Everywhere you integrate, programs are like

400 Bad Request

SOAPFaultException

So sensitive. Bloody snowflakes.  

Well, it’s not my problem that you can’t handle my JSON. I’m just coding it like it is, man!  I know what my message is, there’s no problem with my message. My message is superior to yours in every way.  If you can’t handle my data, if you can’t handle the truth, that’s your problem not mine.  

But is it though?  

We don’t argue like this about code, do we? Why not? Because in coding we actually have to be rational.  If we want our applications to be effective, we have to consider the message protocols and formats of the services we are integrating with.  Even if we don’t particularly like them.  It makes no sense to send off messages in the wrong format, and then get all superior and defensive about it when we get an exception.  

“Bad Request? BAD REQUEST? That’s the best request you’ve ever seen in your life! What’s wrong with you? Calling MY requests bad, shame on you, you entitled little shit”   

Ridiculous, right? Right? 

Surprisingly, this is true for human communication too. You can’t just spew out messages the way _you_ like them, without any consideration of the audience, and then get angry when they respond in the only way available to them. 

Well, you can, but don’t expect great success with this approach. 

RTFM people, get to know the APIs you’re calling if you want to have any success calling them. 


Reuse? Refuse! 9 minute rant on reuse done wrong


Growing pains

"Being a grown-up is basically googling things and being tired all the time", I read somewhere. Yup, that pretty much sums it up. At least since having kids, I just never seem to have enough time. I'm exhausted; I've always got like 10 things going on at once. And I'm not the only one. Does it have to be this way? Don't you sometimes wish you could just be bigger? That would be great, wouldn't it? Like King Kong. That would make my life so much easier! Arms 5 meters long, feet the size of walruses, a mouth big enough to bite the head off a fully grown manatee.

Wait... WAT? 

No, that obviously wouldn't help at all. OK, I guess the extra size would mean I could just stomp on, or eat, anyone who imposes extra work on me. But as for normal completion of the work at hand, added size gives no extra benefit whatsoever. Less work might help. More people might help. Growing bigger, even if it were possible, will obviously not help at all. It will likely make things a lot worse. With your head towering five floors above, you will have trouble picking up on what's actually going on closer to the ground. Preparing meals for your family will be a bit of a hassle if you don't even fit in the kitchen. Unless your goal is to intimidate or harm, growing bigger is not such a great idea.

So why is this our go-to approach when scaling business and software? In organisations we start with a small team: a visionary manager - the head, an engineer - the arms and legs, a marketing person - the mouth. A well functioning organisation. Everything is efficient and exciting. Then as more and more work comes in, we scale this structure. The engineer becomes an engineering department, the marketing/sales person becomes the sales department, and so on. Before long the organisation has turned into a bureaucratic Godzilla.

In software, we start with a simple view, some business logic and a persistence layer. A sensible division for a small, single-purpose application. Then as time goes on and more and more features are added, we keep and grow this structure. Instead of dividing our code up by functionality delivered to users, our application's package hierarchy still starts off with "views", "controllers", "models", and so on, with few clues about what problem the application actually solves. If and when our monolith finally gets too big, we typically keep partitioning along the same lines. We end up with a "frontend team" and a "backend team", a "rules engine team" and so on. You've gone from a giant Godzilla monolith to more of a Power Ranger Zord thing or whatever it's called (you know, when all the Power Rangers join forces and become this big awkward-looking super-creature?). I don't know if that is much of an improvement.
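To illustrate with made-up package names, the layer-first structure tells you nothing about the domain:

com.example.app.views
com.example.app.controllers
com.example.app.models

while a feature-first structure gives each slice a chance to evolve, or be split off, on its own:

com.example.app.parentalleave
com.example.app.benefitfraud
com.example.app.documentarchive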

A lot has been said about Conway’s law, and what I’m talking about kind of relates to this principle.  But while Conway describes how (large) organisations end up creating software in their image, I’m talking about how small organisations and code bases grow into large dysfunctional bureaucracies. This is as big a problem.  Our default strategy for coping with more work, is to grow the existing structure, adding bureaucracy.  Instead of splitting the problem up and creating new distinct organisations to handle distinct problems. 

Everyone knows that large bureaucracies (in code or otherwise) tend to be slow and hard to work for/with.  Change is a nightmare.  Every change necessarily needs to fit into the whole in a way that doesn’t harm any of the other parts of the system.   Just thinking about a potential change can take months! Implementing it can take years, with costs in the millions.  If, on the other hand, your organisation is small and focused, only handling a limited number of tasks, it’s easier to reason about the effects any potential change will have.  Changes won’t impact other organisations, so decisions can be made locally and implemented immediately. No need to go up a long chain of management to get approval. 

Being a large bureaucracy isn't all bad, of course. Losses in efficiency are mitigated by the fact that, as already mentioned, given their size, they can just stomp on, or eat, any competition. Thus obviating the need for improving performance. Large bureaucracies also help fund business management consultancies - who would otherwise go bankrupt. And they give jobs to large swaths of managers who would otherwise be out of work. These are noble pursuits, of course. You can't say an action is dumb until you know the intent behind it. If the intent of growing your current organisational structure (of code or people) is to

a) employ the maximum number of people and fund your buddies in the management consulting sector, or
b) crush all competition by sheer size, or
c) indulge a love of endless meetings and coordination (because you find quick-paced development stressful),

then you might be on to something. But if you want efficiency, the ability to move fast, grab opportunities, and deliver exactly what your customers want, then you should take some time to think about how you're growing.

When a new requirement comes in for your software, don't just blindly bolt it onto the existing application. Think about how this new feature will be used. Will it be used by the same people at the same time? Are you creating data structures where only half of the attributes are in use in any given setting? Are you reusing components in a way where everyone relies on component X, but they all use it in a different way? This is a code smell. This smells of King Kong and the BFG's whizzpoppers. Stop blindly growing your existing structure. Actively look for ways of creating new ones instead. New ones that are free to evolve in whatever way they need to, leaving what you've already got free to evolve in whatever way it needs to. It might be easier in the short run to just keep adding features, but it is likely to slow you down in the long run. And speed is of the essence if you want your business to stay alive in today's fast moving marketplace.

If you’re good and fast enough you might just get lucky and get bought up by one of those large bureaucracies :-) 


Why and when blocking matters

So last year I worked on some Vertx.io code. I'd never come across it before, but the documentation was great and it worked extremely well. Still, I just couldn't bring myself to like using it. I really disliked working with the asynchronous non-blocking APIs. They were cumbersome to work with. In every way. Like driving a Lamborghini - fast, great quality, but just not very easy to use. In fact, downright impractical in many ways. Do we really have to put ourselves through this to get performant code, I thought to myself. Then I read this post on concurrency by Joe Armstrong. Hah! I thought. I'm not the only one who reacts to this stuff. Why can't we do concurrency like Erlang on the JVM? Why does blocking code have to be so problematic? I couldn't find any texts that really explained it in a simple and satisfactory way. But after lots of reading, thinking and experimenting, I have finally sorted out my understanding of the subject and thought I'd share my findings, in case there are others like me out there who feel confused by the issue. So here we go.

Let's start at the beginning - bear with me. You've got a CPU. It can process one instruction at a time. If you want multiple tasks running simultaneously - parallel execution - well, you can't. However, since each task consists of a list of instructions, the CPU can interleave instructions from one task with another, giving the impression of simultaneous execution at the cost of some reduction in performance: concurrent execution. The instructions for each task are executed on what we call a thread. The CPU scheduler allocates time to each thread according to various rules, so each one progresses in a more or less predictable manner. Sometimes the instruction in a thread is simply: "wait for a response coming out of the end of this socket". The thread is blocked, we then say. There is nothing for the CPU to do, so it simply moves on to the next thread immediately. Blocked threads do not block the CPU in any way - it always keeps moving. There is no blocking. I repeat: there is no blocking. Blocking is simply a mechanism to signal that the CPU can move along to the next task.

for (int i = 0; i < numThreads; i++) {
    new Thread(() -> Thread.sleep(1000)).start();
}

(Yes, yes, yes, I know that Thread.sleep throws an InterruptedException, so this won’t compile, but let’s just, for the sake of readable code examples, pretend it doesn’t, OK?)

Thread.sleep is a blocking call. But that doesn’t matter at all for the performance of this code.  This code will complete in 1 second whether numThreads = 1 or 1000.  

So why are we spending so much time talking about non blocking code? If the CPU is never blocked anyway - why is blocking a thread such a big deal? Well it isn't, as long as you don't have too many concurrently running tasks. If your web server (for instance) only ever has 500 concurrent users, you can have a thread pool with 500 threads, and you won't ever have a problem with the threads blocking. True story.

But what if you have 10 000 concurrent users? Now you might have a problem. But the problem is _not_ that your threads are blocking. Blocking threads are not the problem. The problem is that you can't have that many threads. Try the sleep code example above with numThreads = 10 000. You won't see a performance degradation, you'll see an OutOfMemoryError. Each thread takes up a considerable amount of system resources. If your concurrency strategy is based on threads, you are limited by the number of threads your system can handle.

Basically, your OS has implemented a concurrency mechanism that does not scale. Why can the Erlang VM scale its concurrency mechanism when the OS can't? I honestly don't know enough about it to answer, but that's the reality. For whatever reason, your OS sucks at concurrency. If you want scalable concurrency, you're going to have to choose either a framework/library that helps you (like Vertx.io) or a VM that does (like BEAM, the Erlang VM). But before you do, check what the limits are for your application and your servers. If you've got a blocking Java implementation that works, and the number of concurrent operations does not exceed the number of threads your servers can handle, you're sorted. There is no need for you to worry about blocking threads, ever. You don't have to solve problems you don't have.

But if you've only got one server, or a limited number of them, and need to scale your application to handle more than a couple of thousand simultaneous blocking operations, you need to base your concurrency strategy on something other than threads. You'll still be using threads at a lower level, but at runtime you'll need each thread to handle multiple tasks. _This_, here and only here, is where blocking becomes a problem. If each thread is responsible for completing, say, 1000 tasks and one of them blocks execution, this means none of the other 999 are getting _any_ work done. Blocking one task means blocking them all. Obviously blocking is disastrous in this case.

So what do you do in practice? There are a few different approaches out there to deal with this. One is the "single-threaded event loop", most famously used in Node.js. On the JVM, the already mentioned Vertx framework uses this strategy. Put simply, your application (like the CPU scheduler) keeps a list of tasks - units of runnable code - that are executed one by one in a single thread. If you need anything done, you just add that operation to the list and it will be executed in turn. The tasks don't take up much memory, so you can have practically any number of them. As opposed to the CPU scheduler, which can stop threads at will to switch execution over to the next thread waiting in line, the single-threaded event loop completes each task before moving on to the next one. One at a time they get completed, as fast as can be. (UNLESS of course one of them blocks the thread.)

So how do you query a database? Or call a remote service? At the end of the day, if your program needs data from the database, it's going to have to wait for that data to return. Something in your code has to be listening for the response back from the database. At some level the application is "blocked" while it's waiting for the data. At some level a socket is opened, and something is waiting for data to come out of it. What is doing the waiting for you, if you're not allowed to block the thread?
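Before answering that, note that the task list itself is less magic than it might sound. A toy version - nothing like a real implementation - is just a thread draining a queue:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ToyEventLoop {
    private final BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<>();

    public void addTask(Runnable task) {
        tasks.add(task); // anyone can schedule work, including the tasks themselves
    }

    public void run() throws InterruptedException {
        while (true) {
            Runnable task = tasks.take(); // wait for the next task
            task.run(); // run it to completion - nothing else happens in the meantime
        }
    }
}

One thread, one task at a time. As long as no task ever blocks, the queue keeps draining at full speed.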

Let's take a trivial web example. Starting with a traditional blocking implementation.  A servlet that queries some remote service for data before returning the result:

public Response getSomething(Request parameters) {
    validate(parameters); // throws exception if invalid
    String id = parameters.getParameter("id");
    Person person = getPersonFromRemoteLocation(id);
    return transformObjectSomehow(person);
}

Here, the "getPersonFromRemoteLocation" implementation will most likely open a socket, connect to a remote socket, send a request, and block the thread until the response has been completely received. With non-blocking IO, you'll still be opening sockets like before, still sending data over them like before, and still receiving data from them like before - it's just that you do it all in small increments. For the example above to work you need, conceptually, to turn your one code block of consecutive operations into 3 separate tasks. First you set up your request, open a socket and send it.

(None of this is "real" code, I'm just trying to illustrate the process in a simple way.)

getSomething(Request parameters) {
    validate(parameters)
    String id = parameters.getParameter("id")
    SomeConnectionContext ctx = sendDataOnSocket(remoteAddress, id)
    addTask(isResponseComplete(ctx))
}

This does not block. The last method is void.  You’re just adding the next step of the operation - the polling for data off the socket - to the application’s list of tasks.

The polling task could look something like this.  It reads data from the socket and checks whether it is complete yet.

isResponseComplete(SomeConnectionContext ctx) {
    SomeConnectionContext updated = ctx.pollForMoreData()
    if (updated.isComplete()) {
        addTask(resultHandler(updated.completedResponse))
    } else {
        addTask(isResponseComplete(updated))
    }
}

Polling the socket for data is not blocking - you just get all the data available so far, check if it's complete - if not, keep it and try again.  If it is complete, call the result handler with the completed data.

Finally you've got the code that handles the result.

resultHandler(Person p) {
    return translateToMyDomain(p)
}

Each of the three bits of code can be completed without blocking. Each bit can easily be interleaved with other tasks. Your thread will always be busy, always performing meaningful work.

Performance-wise this works great. But as mentioned already, there is a major drawback: the code becomes a lot more complex. Instead of having one easy-to-read list of instructions, you've now got to split everything up. Testing becomes more cumbersome, and understanding what's going on is more difficult. If you forget yourself and make a blocking call - which is really easy to do, given that most of today's libraries and frameworks are based on blocking calls - you're undoing all the benefits of the async framework. Finally, depending on what kind of remote service you're calling, it might not even give you a real performance benefit. If you're calling a database that can only handle 500 queries at a time, it doesn't really help that your web application can do 1 000 000. The throughput of your system is only as fast as its slowest part. At the end of the day, you need to weigh the pros and cons. Sometimes you really need the speed of a sports car, you really need a Lamborghini - like... when... hmmm... Damn it, I can't think of a single reason you'd actually need a Lamborghini. But let's just say there is one. For those cases, you just have to live with the inconveniences they come with.

There are other concurrency approaches out there for the JVM. I've already written a tiny bit about Quasar in a previous post. Instead of providing library-style mechanisms for concurrency like Vertx.io does, they are trying to solve the problem at more of a VM level. Remember how blocking is not the problem when executing thousands of concurrent tasks? The problem is your limited supply of, and room for, threads. Quasar attempts to remove _this_ problem. "What if we could just have an endless supply of threads?" With Quasar you write your code as before. With blocking code. But instead of running your code on a thread, it is run on a "fiber". The Fiber class has pretty much the same API as the Thread class, and a lot of common libraries have been rewritten to use fibers instead of threads. In an application using fibers, your code gets instrumented with the Quasar agent before it starts up. At each blocking call, the Quasar system, like the CPU scheduler, will switch execution over to another fiber. The result is that you can write code that looks more or less like before. You can have a function that constructs a query, calls the database, and then processes the results, all written with instructions following each other one line after another. But at runtime, the thread running the first part of the function will not be the same as the thread running the end of the function (just like .NET's async-await). So ThreadLocals and any other mechanism depending on the OS thread need to be avoided.
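A minimal sketch of what that looks like, assuming Quasar's Fiber and Strand APIs (and, as with the Thread.sleep example earlier, eliding the checked exceptions for readability):

import co.paralleluniverse.fibers.Fiber;
import co.paralleluniverse.strands.Strand;

for (int i = 0; i < 1_000_000; i++) {
    new Fiber<Void>(() -> {
        Strand.sleep(1000); // fiber-blocking: parks this fiber, the underlying thread moves on
    }).start();
}

A million of these is fine. A million Threads is an OutOfMemoryError.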

I don't have any experience with running Quasar in production, I've just been playing with it a little at home. Trying to get it to beat Vertx. Because I really wanted it to be better than Vertx. But I can't beat Vertx's performance. In addition, I keep messing up, getting error messages, and crashing things. Now I know full well that this may be because I suck, that I'm doing something wrong. I have no doubt that's the case. But this still doesn't change the fact that I got Vertx to perform astoundingly well, without fail, "straight out of the box". The documentation is great and it just works. I still really hate the syntax, but I just have to admit that everything else is awesome.

So what's my conclusion? Well, you know how we used to do all sorts of weird, complex stuff to optimise for limited memory? We don't any more, because we have pretty much universally come to the conclusion that the benefits of simple code far outweigh the drawbacks of using more memory. Especially since memory is so cheap nowadays. I hope some day we'll see non-blocking code as another quaint example of how we used to have to work around limits in our coding environment. At the end of the day, our applications - the tasks they perform, what the user sees - will always block. When you clicked the link to this blog post, you had to wait for it to render on your screen. The code responsible for handling a task should resemble that task as closely as possible for it to be easy to understand and work with. Blocking code is by far the easiest to understand, and thus the easiest to keep correct. So... what concurrency approach do I recommend right now, then? Should we go with Vertx despite the annoying API? Should we switch to fibers? Or should we just use lots of load-balanced, traditional, blocking web servers? Well, for me the answer is easy: none of the above. Screw the JVM, I'm going with BEAM - I'm going to learn Erlang, Elixir, or LFE (lisp (flavoured (erlang)))


Workers idling by the coffee machine

Sometimes you just can't get everything done with your own employees, you need outside help. Contractors. Consultants. Your office space puts a limit on how many of them you can have at any time. As they can be expensive, once they are on the premises you should ensure that they produce as much value as possible. You don't want them sitting around drinking coffee while waiting for decisions to be made. Every minute they bill should be spent producing value for you. How do you achieve maximum contractor utilization? Multitasking, of course! We all love multitasking, don't we? Ensure that there is ALWAYS some work to do. While a contractor is waiting for an answer to a question, she can pick up another task. When the answer comes in, the work can be picked up again by whichever consultant is free. Any one of them. If you limit a contractor's work to just one task, there will be wasted time. If you have them share the tasks between them, you can maximize their utility. Will they get exhausted and confused by all the different tasks? Who cares, they're contractors :-)

To some degree we all do a bit of multitasking, of course. The "agile" idea of cross-functional teams where anyone can do any task is to a large extent built on this idea. It can work well up to a certain point. But there is a real problem with this approach: it can be much harder for you to keep a good overview and understanding of what exactly is going on. The big picture. Who is working on task A? Who can tell me how work on task B has progressed this last month? These questions become increasingly difficult to answer the more people have been involved and the more the tasks are broken into tiny fragments.

I am of course talking about concurrent programming and threads. Threads are expensive and we don't have room for all that many of them, so we don't want them to sit idle and wait for IO. Writing blocking code means threads sitting idle, waiting. This is wasteful. Writing non-blocking code, however, while efficient, all too often leads to code that is very hard to understand and work with. The value the program is delivering -

1) receiving a request from a user
2) validating it
3) retrieving values from external services
4) merging them with the user input
5) storing it all in a database
6) returning the result back to the user

These kinds of flows have to happen in sequence, and they are hard to follow if everything is chunked up into small bits, all handled by random threads, with intermediate results put on queues and sent back and forth. This:

Result submitApplication(UserInput submission) {
    validate(submission);
    Address address = retrieveAddressFor(submission.userId);
    ContactDetails contact = retrieveContactDetailsFor(submission.userId);
    CompletedApplication application = CompletedApplication
        .withAddress(address)
        .withContactDetails(contact)
        .withUserInput(submission);

    return database.store(application);
}

Having it all there, in sequence, is a lot easier to understand than:

void submitApplication(UserInput submission) {
    validationQueue.offer(submission);
}

Where the hell did the data go? What happens next? How do I test this? What are the consequences of putting this data on the queue? It's not obvious.

Or this monstrosity:

void submitApplication(UserInput submission) {
    validate(submission, (result) -> {
        if (!result.failed) {
            retrieveAddressFor(submission.userId, (addressResult) -> {
                if (!addressResult.failed) {
                    retrieveContactDetailsFor(submission.userId, (contactResult) -> {
                        if (!contactResult.failed) {
                            database.store(CompletedApplication
                                .withAddress(addressResult.data)
                                .withContactDetails(contactResult.data)
                                .withUserInput(submission), (dbresult) -> {
                                    // somehow find whatever context this request
                                    // originated in and send the result back to the user
                                });
                        }
                    });
                }
            });
        }
    });
}

It is perfectly possible to write decent applications where threads multitask efficiently, but these non-blocking approaches quickly get confusing and are often implemented poorly.  

There must be another way. There is another way. There are several. Cloud computing, load balancing and so forth provide one answer. We can keep our simple blocking implementations and simply have more of them running. This way, the fact that one process is a bit wasteful doesn't matter as much, since we can just add more of them to get the performance we need. The cost of computing resources keeps dropping, so this is a perfectly viable option in many cases. Moore's law has meant that the value of optimization in code has been steadily decreasing. It is often both cheaper and better to solve inefficiencies with hardware rather than software.

But there are other ways. Why are we relying on "expensive external contractors", OS threads, at all? Do we have to? What if we solved that problem? That's what Erlang has done. And that's what the guys at Parallel Universe are trying to do with Quasar. They make their own threads, lightweight threads. Quasar calls them fibers. We can't get away from using OS threads at a lower level, but we can abstract them away in our programming environment. Fibers are cheap, you can make as many as you want, millions. So there is no need to worry about any of them "idling by the coffee machine". You can write blocking, simple, straightforward code, without any of the drawbacks. Lightweight threads are like having an unlimited supply of employees at a fixed total cost. Like outsourcing your programming to a low-cost country. ...Except they speak your language, understand your domain and culture, share your goals and operate in your own timezone. So... quite different from outsourcing, when it comes down to it. More like outsourcing to robots - small, cheap robots that do your work for free. Quasar fibers have pretty much the same interface as threads, so you can port your thread-blocking code to fiber-blocking code with very little effort. The Comsat project has already provided fiber-based implementations for a whole range of technologies. I think this looks pretty awesome! Why hadn't I heard about this before? I'm going to have to look into this in more detail. I'll get back to you with my findings before long, hopefully :-)


Slavery v2.0

Imagine a perfect world. Heaven. Utopia. How would we spend our time, ideally? Besides the obvious "spending time with family and friends", I would spend my time reading, learning, thinking, making things, writing, trying to make a difference somehow... How would you spend your time? Imagine if we had more time to do what truly interested us in life. We'd have more art, more music, more literature, more science, more sports, more time with loved ones... We'd also have more beauty pageants and people wasting their lives watching football, but I can live with that as long as I don't have to participate.

A century ago, philosophers and politicians alike were dreaming of a future where machines would obviate the need for most of our labor, moving us in this direction. We'd be free to pursue more noble causes like scientific discovery, medical advances, philosophy, music, poetry, laughter, love... Here's a good text by Bertrand Russell from 1932. Where have those dreams gone?

Not only are machines now able to do much of our work for us, we’ve also increased our working population almost two-fold by adding women to the workforce.  Why are we still working as hard as we did nearly a century ago?  The thought of a future where robots do our work now fills us with dread: “Oh no! They are taking our jobs away!”  What’s wrong with us? Seriously.

Have we been completely brainwashed? I know I have been.  Every time a politician talks about employment, the aim is always “maximizing employment”. Jobs for everyone.  They talk as if this is the only possible goal any sane person should want.  But it’s not jobs we need. It’s a good supply of useful products and a means to get them. 

Imagine "Elizabeth", a factory worker producing life-saving mosquito nets. If she stays home from work, it means fewer nets produced. Bad for the company, bad for people living in areas with malaria. The work she does is important. Then the company invests in robots that not only do her job, but do it better and faster by a factor of 10. She and her 100 or so colleagues are out of a job. More market value is created with less manual labor. Great! The company now makes a LOT more money, not only from increased production, but from reduced labor costs. Wonderful! The fact that Elizabeth doesn't show up for work doesn't matter now. Not for her company, nor for the rest of society - because her job is being done, better, by robots. The only problem she, and we, now have is that she has no money. The problems for her are obvious. But this is also a problem for the more productive of us, because she can no longer buy the things we are producing. If nobody buys our products, then we end up without money. Without enough money in circulation, the economy stagnates.

For the economy to function, we need both supply and demand. But we don't necessarily need jobs. Jobs are one way of achieving both supply and demand. But now robots are automating supply. What we need is a way of automating demand, a supply of money. Some are suggesting basic income. I find the idea intriguing. But there are other ideas worth considering as well. In any case, we need to start realizing that we are finally entering a future where we hardly need to do any manual labor anymore. This is slavery v2.0. It's beyond awesome! The last thing we need is to start desperately looking for more "work". We don't need more 9-to-5 office work maximizing profit for shareholders. Seriously people, if intelligent aliens arrive, we have some explaining to do. We're not making any sense. Stop obsessing about everyone working full time. Let people indulge in creating true value, like music and literature, art and scientific discovery. Let's start dreaming again.


Tear down this wall!

“We can’t work as efficiently as smaller companies. We’re just too big. We’ve got too many systems, too many customers and too many employees.”

If I had a dollar for every time I've heard something along those lines. Too big. Bullshit. Your problem is not your size, whether measured in customers, systems or employees. Your problem is how you manage them. Centralized control does not scale well. That's your problem. You're trying to control all of your systems, all of your employees and all of your customers the same way. That's what's slowing you down. Five-year plans and centralized control: this is software development communism.

Instead of letting go and delegating control to the various application teams, you obsess over creating a constellation of applications that looks nice on a diagram. A diagram that lets you understand the entirety of everything and how it’s all connected. One integration platform. One process engine.  Instead of every app having its own domain model, you create one common one for all to use.  Much easier for the person at the top to keep track of.  Worse for everyone else.

Simple example. Say you have several applications that all need to generate, archive and distribute documents. Today they all have their own separate code to do this. Duplication! the cries go out. We need to create a common document handling service they can all use. OK, NOW you have a problem. Because now you've got coupling. And coupling is a much worse enemy than duplication. The number of systems doesn't matter nearly as much as the amount of coupling between them. Instead of, say, 3 separate, standalone apps that can be updated at will and tested in complete isolation, you now have 3 apps that are all connected to the same service. What if app 1 needs that service to behave a little differently in certain cases? To make this change you now need to re-test your entire portfolio to ensure the change didn't break anything for the other connected apps. Alternatively, you can make the change in the client app itself, thus allowing document handling logic to seep out of the document service and exist both in the common document service AND in each individual application. Over time, typically, both things happen. The common service grows and grows and gets more and more complex and configurable. Each client app only uses a small fraction of the functionality provided. The service also gets harder and harder to maintain, as the intent of the code is hidden in configurability. At the same time, a substantial portion of document handling logic is added to each of the client apps to compensate for the functionality that couldn't be added to the common module. A complete mess.

If you don’t create a common service, you will most likely have a bit of duplication. This means that in the case where something needs to be updated – say the archiving system is upgraded - the required change will have to be implemented several times over. Every system using the archive must be updated.  This is annoying manual labor. But it is not difficult.  Writing and maintaining a common module that has to work for every possible client, requires less annoying manual grunt work, but it is very difficult.  Know what tradeoffs you’re making.

Sometimes common services pay off, of course. If they're small, and only do one thing and do that same thing for all clients, they can be very valuable. Which brings me to my second point. Don't worry about the connections looking messy when you draw every app and service in one diagram. If there are 400 applications, and each one is connected to 3 others, it's going to look appalling. Don't draw that diagram! The only one interested in that diagram is the enterprise architect. Screw him (or her). The architect is far less important than your end user. The purpose of your system is to provide meaning and value to the end user, not your enterprise architect. Any one user will never use all your systems. They will use a couple of them. Those are the diagrams you need to focus on. That's where your energy should be spent. Focus on what the users need, and how the data they are providing or receiving passes through the part of the system they are in contact with. If that diagram is full of systems, then you might have a problem. If, for a user to enter some data, you need to go from a user interface app, to a process engine, to a document handling service, to a queue, to an archive and then back again - that might be something you'd want to simplify. This is the only context in which you should worry about the number of systems and the amount of coupling. But reducing the number of systems each user deals with will increase the number of systems at the enterprise level (and vice versa). This is why the enterprise architect doesn't like it. We need to remind ourselves who we are really working to please. Our users or our architects?

Another argument I hear a lot in favor of introducing common systems, and reducing the number of applications is that “users dislike having so many systems to work with”. No they don’t! Ask anyone – what do you prefer: SAP (one system that does it all) or the iPhone with a multitude of apps downloaded from the App Store? They prefer the iPhone.  There are hundreds if not thousands of apps for every kind of need. But this is much better than the alternative: One app that serves everyone badly.  This idea that users dislike having many systems is bogus.  You know what they don’t like? They don’t like having to remember lots of usernames and passwords.  With a common authentication system (yes, I’m advocating reuse!) that problem is solved. What users don’t like is bad user experiences. It’s much easier to create good user experiences if you’re allowed to create several tailored small apps that serve very specific purposes.  If you only have one app, and it has to do everything imaginable for everyone, it’s not going to be very user friendly.  

So, no, I don’t accept your excuse.  Large companies are not inefficient, slow and bureaucratic because of their size. They’re inefficient, slow and bureaucratic because of their communism. Mr. Enterprise Architect; tear down this wall!


Future proof

“It is a key requirement that the system be able to cope with future changes”

Heard this before?

“We don’t want a system that will be out of date within a year or so, forcing us to spend heaps of money just maintaining it and keeping it up with current demands”

I’ve certainly heard this enough. What the customers want is software that will adjust itself automatically, or at least easily, to new requirements. Whatever those requirements might be.

Sure. That's reasonable. It's reasonable to want that. You know what I want? I want a cordless garden hose. How awesome would that be?

Awesome or not, some things are impossible.  Cordless garden hoses are impossible. So is software that changes without being changed. If you need it to adapt to future requirements, then guess what: that involves adaptation – AKA change.

"Not necessarily. Not everywhere" some might say: "Look what I've done here, if THIS part of the software ever needs to change, all you need to do is alter x, y and z in this xml-file, and - voilà". Oh, really? Explain to me how altering an xml-file is not a change. A change in an xml-file is a change. A change in a database table is a change. A change in a properties file is a change. Moving changes outside the code itself does in NO WAY stop you from having to make changes. It does, however, create extra complexity in the code, while limiting the types of changes you can make. It also makes it much harder to track the changes: what was changed, when, and by whom. So if you like the idea of allowing random changes in your production environment, with limited accountability and little ability to keep track of what's going on - by all means, move all your settings to your database or properties files. Have fun.

This idea that it is possible to create “future proof” code is harmful on so many levels. We’re solving the wrong problem. When a kid complains that he doesn’t want to brush his teeth anymore, the solution isn’t to just brush them for 5 hours straight one night, and expect to never have to brush them again. You can stop spending money on the software after it has been built, or you can have software that stays relevant. Pick one. Making a new, flexible, ultra-modern system for developing film would not have saved Kodak. Configurable horse carriages in carbon fiber would not have saved the horse carriage industry. We need to stop kidding ourselves that we can predict the future. We can’t. We can’t think of everything up front and cover all potential future use cases. Nor can we make code so flexible that it will work well in any given situation.  It’s not for lack of trying, but so far SAP – AKA the Germans’ revenge for WWII - is our most successful attempt. Sure, it‘s been a great commercial success for its makers. But it has been a complete and utter disaster for pretty much all who have to use, maintain and pay for it.

If you want a future proof system, you don't want immortal, flexible code. You don't want the T-1000 Terminator. You want South Park's Kenny. You need code that's easy and fun to kill. You need to get used to killing it, often, so you can replace it with whatever you end up needing.

You don't want configurability, you want continuous delivery. If you're always deploying new versions of your application, you don't need the software to be flexible. Your process is. Software that just does one thing unapologetically is much easier to reason about and get right. It is easier to test. It is easier to change. It is also easier for new people coming into the team to understand. "Hard coding" implementations or even values is not a problem if you have a development process where your code is maintained and released regularly. In fact, it can lead to clearer code that's easier to work with.

Example: You're making a web application that handles applications for some kind of public benefit. Say parental leave (where I'm from, if you're employed, you're entitled to 12 months of paid parental leave, but you have to apply to get it). Your app will have a page with a form where you fill in the expected due date, your current employer and so on. Then a tester, or disgruntled user, reports that "when I click 'Submit', I'm redirected back to the same form, instead of getting a confirmation that the application was submitted".

If you're working on that project as a developer, what you want is to figure out why the system is behaving this way. First you want some way to easily locate the application page, and on that page, you should easily find the "Submit" button. If the page and/or button is automatically generated from parameters found in the database, you're going to spend forever just trying to locate where that button even is. No maintainer will ever thank you for this. Once you've located the button, you need to find out what it does. This should not be hard to do. It should not involve searching through configuration files. It should have an easy-to-find onClick event, which either contains, or sends you to, the implementation. If you want your code to be easy to work with, you want to keep the number of steps required to find the implementation as low as possible. (Within reason, of course. You don't want business logic in the UI code, but you should aim for the implementation to be as close to the UI as possible. Markup -> EventHandler -> Business logic implementation.)
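Something like this, with hypothetical names - the point being that the chain from button to business logic is short and greppable:

submitButton.onClick(event -> {
    ParentalLeaveApplication application = applicationForm.read();
    parentalLeaveService.submitApplication(application);
    showConfirmationPage();
});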

Many people will implore you NOT to tie the EventHandler code to a particular implementation of the business logic. Because – “WHAT IF YOU NEED TO CHANGE THE IMPLEMENTATION SOME DAY?!?!!”. You need to call an abstract interface they say. You know what I’d do if the implementation needed to change? OK, are you sitting down?  Here’s what I’d do: I’d open the file with the implementation. I’d open the file with its tests. Then (drum roll) I would change them! Studies show that you can actually change code after it has been written. Even classes that don’t implement interfaces. I know! Crazy! It works. You should try it.

I mean really, how would you do it differently if there was an interface there? If you needed to change something, you'd do EXACTLY THE SAME THING. You'd still have to locate the implementation and change it. The interface saves you no work whatsoever.

Someone smart once said that you should always “program to an interface”. But you don’t need to implement a Java or C# interface to have an interface. All classes (or other groupings of code) have an interface. Their method signatures and member variables are their interface. This interface should not be implementation specific. That’s the point. We’re not talking about java interfaces. If we were, that would mean this advice could only apply in the java, c# or other programming languages with specific things called interfaces in them. When someone tells you to put your money where your mouth is, you don’t rush to the nearest cashpoint, and put your lips over the cash dispenser and fill your mouth with money. You need to understand what the phrase actually means.

Your method names and variables should show the intent of what the class does, not how it is implemented.  Your ParentalLeaveManager (or whatever) should have methods called

submitApplication(application)

not

submitApplicationJSONOverJMS(application, queueTopic, false, 42).

You should be able to understand what the function does without having to dive into or learn about the implementation details. You should be able to change the implementation without altering the method signature. That’s the point. But none of this means you HAVE TO make an empty, meaningless and annoying IParentalLeaveManager interface for your click event to refer to. Typical enterprise projects are littered with these pointless interfaces with only one implementation. Have you seen FizzBuzz done enterprise style? Absolutely hilarious! It is, sadly, not far from the reality out there. I swear, I once showed an enterprise developer this project and his response was “So? There’s nothing wrong with that”. True story!

Interfaces galore are not only pointless and really annoying when you’re trying to find out how things are implemented. They also send counterproductive signals to the maintainers. Java interfaces are meant to be used when there are a bunch of implementations available, and your code wants to access them all in the same manner. Because of that, we all know that changing the method signatures of an interface is not something to do lightly. Interfaces say “HANDS OFF – don’t change this unless you REALLY know what you’re doing.” So while we often add interfaces to facilitate future changes, the effect is the opposite. The presence of interfaces stops the team from making necessary changes in the APIs of their business logic. Instead of altering the method signature of an interface, a maintainer with a bug to fix or a feature to add is more likely to work AROUND the interface, adding the new code before and after the call through the interface. As the system grows older, more and more business logic seeps out of the core business logic classes, making a complete mess of everything.
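To spell the anti-pattern out, here is an illustrative sketch (names invented, not from any real codebase) of the single-implementation interface next to the plain class that does the same job:

// The "enterprise" version: an interface with exactly one implementation.
// Every reader gets an extra hop, plus a signal saying "HANDS OFF".
interface IParentalLeaveManager {
    void submitApplication(ParentalLeaveApplication application);
}

class ParentalLeaveManagerImpl implements IParentalLeaveManager {
    @Override
    public void submitApplication(ParentalLeaveApplication application) {
        // the actual logic, one indirection away from everyone looking for it
    }
}

// The plain version: one class, one obvious place to read and to change.
class ParentalLeaveManager {
    void submitApplication(ParentalLeaveApplication application) {
        // the actual logic
    }
}

class ParentalLeaveApplication {
    // due date, current employer and so on
}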

Interfaces with only one implementation are the committees of code. If you don’t want to make a decision yourself, if you’re worried about being blamed if it turns out wrong, you delegate it to a committee. The People’s Front of Judea would approve of EnterpriseFizzBuzz.

Some will tell you that interfaces are great for making your code testable. No, they aren’t. They do no harm, but they don’t help either. You don’t need interfaces to create mocks or stubs or spies. Use Mockito or any other sensible mocking framework, and you can easily create mocks for concrete classes. You should also ask yourself why you need those mocks or stubs or spies – with a little rework of your code, you might be able to write tests with very little mocking:

https://gist.github.com/ChristinGorman/4bf388b7184002a07931
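For the record, though, here is what mocking a concrete class looks like with Mockito – a minimal sketch using JUnit 5 and invented names, with no interface in sight:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.jupiter.api.Test;

class ParentalLeaveManager {
    void submitApplication(ParentalLeaveApplication application) {
        // the real implementation
    }
}

class ParentalLeaveApplication {
}

class ParentalLeaveTest {
    @Test
    void mocksAConcreteClass() {
        // Mockito mocks the concrete class directly - no interface required.
        ParentalLeaveManager manager = mock(ParentalLeaveManager.class);
        ParentalLeaveApplication application = new ParentalLeaveApplication();

        manager.submitApplication(application);

        verify(manager).submitApplication(application);
    }
}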

Unit tests should, ideally, involve no code beyond the unit being tested. Extensive use of mocking should be seen as a code smell, if you ask me: it indicates that the system is too tightly coupled. So no – interfaces DO NOT make your code more testable. Nor do they make your code future-proof.

I’ve been a bit too harsh so far on those who want systems to just stay relevant without work. There are of course several things you can do, and several things that you can avoid doing, that will make your code base more future proof without requiring code changes.

Firstly, I’d recommend adding a “manual override” option wherever possible. I once helped build a system for handling benefit fraud. The main aim was to remove the manual work involved – which was considerable. The system would let case officers add a new piece of information about the person who’d received benefits; the calculations would then be rerun, and the amount the person needed to pay back would be computed automatically. For most cases this was pretty straightforward – there were a couple of typical scenarios we were able to automate quite easily. But there were a great many corner cases we couldn’t handle so easily. Instead of aiming to write code that handled every possible eventuality, we added a “manual override” option to the calculations. In cases that didn’t match the standard ones, we let the users revert to their old mode of operation and enter the resulting calculation themselves. This saved us a lot of coding, and it was also a great way to handle an unpredictable future. If new rules came along, or some unexpected event happened that meant the standard cases no longer worked, users of the system could always use the manual override. Many of these unforeseen future events and corner cases are so rare that it isn’t worth adding specific code to handle them. It is far more cost-effective to handle them manually when they occur.
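The shape of that escape hatch, roughly (a sketch with invented names – the real system was of course bigger than this):

import java.math.BigDecimal;
import java.util.Optional;

// A repayment calculation the case officer can override when the
// automated rules don't fit the case at hand.
class RepaymentCalculation {
    private final BigDecimal calculatedAmount; // what the automated rules produced
    private Optional<BigDecimal> manualOverride = Optional.empty();
    private String overrideReason = "";        // kept for the audit trail

    RepaymentCalculation(BigDecimal calculatedAmount) {
        this.calculatedAmount = calculatedAmount;
    }

    // The escape hatch: a human decides, and says why.
    void overrideManually(BigDecimal amount, String reason) {
        this.manualOverride = Optional.of(amount);
        this.overrideReason = reason;
    }

    BigDecimal amountToRepay() {
        return manualOverride.orElse(calculatedAmount);
    }
}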

Which brings me to tip number two: mitigation over prevention. There are two sides to risk management: reducing probability and reducing impact. In software we tend to forget the latter. We add lots of fancy logic to prevent illegal input. What happens when the rules change? Suddenly the system becomes unusable. In Norway, our parliament voted to change our criminal laws in 2005, but they have only now (2015) been put into full effect, because the police’s computer systems prevented them from applying the new rules.

If we visualize the effects of users’ actions (highlighting potential issues), and if we make it easy for users to correct their mistakes, we’ve reduced the impact of errors so much that we might not have to worry about prevention at all. This approach is much more future-safe. If we allow users to make mistakes, they will still be able to do their job even if the rules change.
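In code, that might look something like this (an illustrative sketch – the field and the limit are invented): the checker collects warnings for the UI to highlight, instead of refusing the input outright:

import java.util.ArrayList;
import java.util.List;

// Mitigation over prevention: collect warnings and show them to the user,
// but let the user proceed - a human can judge the exceptional cases.
class ApplicationChecker {
    List<String> warningsFor(int weeksOfLeaveRequested) {
        List<String> warnings = new ArrayList<>();
        if (weeksOfLeaveRequested > 52) {
            // Don't block: the limit may change, and the case officer can decide.
            warnings.add("Requested leave exceeds 52 weeks - please double-check.");
        }
        return warnings;
    }
}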

The more automated your system is, the more work is required from software maintainers to keep it up to date. In these cases, configurability and interfaces galore will very often only get in the way – adding complexity and confusion, and preventing your maintainers from doing whatever they need to get done. The more you let your users take control and decide how to use the system (less automation), the less maintenance work will be needed – BUT the less time you’ll save your users, since they still have to do much of the work themselves. Your choice often boils down to this: do you want to save the maximum amount of work for your users (a fully automated system)? XOR do you want to save time on system maintenance (a more manual system)? The choice is yours. You can’t have both.


What they don’t teach you about making your application code maintainable


The most important role of all?

Why are programmers at the bottom of the software development food chain, when we’re clearly the MOST valuable players? Without us, there would be no software! We also have a good understanding of the domain we’re working on, maybe even better than the users! We are certainly the only ones capable of figuring out how to get our apps to work. (That’s a compliment, right?) If you’ve got a good team of programmers, you don’t even need any project managers or UX or anything like that.

Oh no, say the project managers – they are the most essential. Without them, the project will descend into chaos. PMs ensure the project is completed to specification and within the cost estimate. The coding itself is not very important; programming is a commodity that can be bought from anywhere. Besides, they know tech – they use it all the time. They can tell a good application from a bad one. If you’ve seen one bad system, you know how not to build one. Obviously. They’re more than tech-savvy enough to make decisions on how to make software.

Then come the UXers – they are obviously the most important, as they are the ones who ensure that the product meets users’ needs. It doesn’t matter if the project is completed on time and on budget if the result is unusable. UXers understand the user, they understand the basics of cost/benefit analysis, and they might even have a little coding experience, like HTML and CSS. If only they get the time to make some decent UI mockups that they test on real users in the planning phase, the project can’t fail. Hand the mockups to the code monkeys and you’re set.

Sad and ridiculous, isn’t it? Everyone so convinced of their own importance and capabilities. It would be so much better if everyone just realized the truth:

   That the programmers are right!!!

I’m only half kidding. Bear with me. Apart from the end users (the most important group of people in any software development project), the only other absolutely essential group is the programmers. You just can’t get away from the fact that software cannot create itself. You can’t make software without people who can make software. Those Gantt charts and WBSs won’t turn into apps on their own. Those Photoshop or Illustrator mockups might look pretty, but they won’t do much for your end user. Without someone to transform your ideas into actual software, you’re not getting anywhere.

My point is not that project management is unimportant. My point is not that UX doesn’t matter. They both matter. Enormously. But these roles need not necessarily be filled by separate people. Since you can’t get away from involving programmers at some level, you might as well make the most of them. Programmers are, more often than not, intelligent, caring, decent human beings with opinions, thoughts, valuable insight and tons of ideas. Their potential is wasted when they’re treated as Jira-task-eating code monkeys. Even the most awkward, badly dressed and smelly geek will have valuable insights. We’re missing out on so much by treating programmers as interchangeable “code creation resources” locked away in the IT department.

“But I just can’t talk to the programmers. I can’t understand what they say.” “It would be madness to give my team any responsibility. They would mess everything up. They are completely irresponsible.” I keep hearing statements like this from the managerial classes. Firstly, if you’re in charge of a software development process and can’t understand what your people are talking about… MAYBE, just maybe, they aren’t the problem. Maybe it’s YOUR vocabulary that needs some work. Just a thought. Secondly, if you treat people like irresponsible children, they act like irresponsible children. If you give them some responsibility, they might surprise you. Who knows what they might be capable of? It might turn out a couple of your programmers deeply care about design – I don’t, but it happens – and you might be able to use those people both as programmers AND designers. You’ve just made your team members far happier and more loyal, while at the same time improving their productivity, saving you both time and money. Some might enjoy a bit of management. Some might be really good at testing. Some might have a knack for spotting future business opportunities. I’m telling you, you’re missing out big time by not finding out.

Your IT department is a treasure trove. If you’re making software, you need programmers anyway. At some level, you need to communicate with them too. Once you’ve got a team gathered, maybe they can handle many of the other roles you need in the project – like managing, reporting, designing, testing. Maybe not, but finding out makes a lot of sense.
