Interview-based knowledge sharing

For years I have noticed how hard it is to extract decent information from developers and craftsmen about their work. Developers often lack the skill, or the taste, for documentation. The same goes for specifications, which are another occasion to notice poor performance.

To work around this, specialized project managers have to fill the role, but that can create a gap between developer ownership and the final result. Sometimes inspiration strikes and developers produce a decent amount of documentation. But honestly, that is luck, not the rule.

I found a way to mitigate this issue, which I began experimenting with at work some time ago. It's based on a recorded interaction between a project person and a person of the craft.

Here are the rules:

  • the project person organizes interviews with stakeholders from all specialties (developers, business people, operations people, sometimes users), aiming for as diverse a population as possible. It can also be done outside any project, simply to share a certain kind of knowledge with an expert
  • interviews take place in written, interactive form in a chat that enables logging (IRC, Slack, whatever)
  • it lasts 20 minutes more or less, but the format can extend to hours if the need is felt
  • once the interview is finished, the interviewer cleans up the log: removes off-topic details, fixes typos, strips elements that belong purely to the chat style of communication
  • the cleaned log is reviewed by the interviewee, who must agree to its publication
  • the log can then be added to whatever project space is dedicated to the topic at hand (documentation, specification annexes, study, paper)
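To make the cleanup step concrete, here is a minimal Ruby sketch; the log format and the filters are purely illustrative, not a prescribed tool:

```ruby
# Strip chat-only noise from an exported log before the interviewee
# reviews it. Assumes lines like "14:02 <nick> message"; join/part
# notices are dropped and timestamps removed. Hypothetical format.
def clean_log(raw)
  raw.lines.map(&:strip).filter_map do |line|
    next if line =~ /(joined|left) the channel/i # connection noise
    if line =~ /\A\d{2}:\d{2}\s+<(\w+)>\s+(.*)\z/
      "#{$1}: #{$2}"
    end
  end.join("\n")
end

log = <<~LOG
  14:02 <alice> how does the build pipeline work?
  14:02 *** bob joined the channel
  14:03 <bob> it starts from a cron job, then runs the rake task
LOG

puts clean_log(log)
```

The interviewer still has to do the editorial pass by hand; a script like this only removes the mechanical noise.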

There are various beneficial side effects to this endeavor:

  • actors feel more engaged in the project or topic at hand
  • it creates a bond between actors and facilitates future exchanges
  • it gives everyone an equal chance to speak, even the shy ones, because the one-to-one context is much less frightening. In a meeting, there are people who never speak. Are they stupid? They are not!
  • it creates content that can be shared with other actors, giving them a better chance to understand the points of view of the other parties
  • it creates a useful reference for the project: raw material that can be annexed and used for summaries. It also feeds an eventual search engine, if the publication space has one
  • there is less risk of quoting someone out of context, because the full context is provided
  • it is much easier to organize interviews, or hold them on the fly, than to set up meetings
  • there is no feeling of wasted time as in a meeting: the time dedicated is intense and very interactive, with no waste

I had the occasion to try this technique in a context where internal communication was not optimal: for knowledge sharing at first, then for exploring options on a new project and setting up specifications. I was happy to notice that this formula produces pretty good output and generates a nice feeling in all parties involved.

More study of this approach will certainly come. It's a lot of fun.


Observability and Digestibility

Observability is a word I love. I found it again in a recent blog post about system blindness, and it reminded me how critical this need is. Our systems get more and more numerous and smaller. The reliability and debugging of a platform now involves various loops, given the multiplication of actors.

Observability should be a core prerequisite when designing a service-oriented architecture with micro-services. But just having everything plugged into some ELK stack is not going to help that much. I feel there is a new job in there, some function that has to be fulfilled: something to reduce that vast amount of data into something that makes sense. An intermediary that correlates logs from various sources, puts them together, and reduces them to meaningful 'events'.
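As a rough sketch of what such a 'digester' could do, assuming logs that carry a shared request id (the line format and field names are invented for illustration):

```ruby
# Hypothetical digester: correlate raw log lines from several services
# by a shared request id and reduce them to one summarized "event".
LINE = /\A(?<time>\S+) (?<service>\S+) req=(?<req>\S+) (?<msg>.*)\z/

def digest(lines)
  lines.filter_map { |l| LINE.match(l) } # keep only parseable lines
       .group_by { |m| m[:req] }         # correlate by request id
       .map do |req, entries|
    { request:  req,
      services: entries.map { |m| m[:service] }.uniq,
      steps:    entries.size,
      last:     entries.last[:msg] }
  end
end

logs = [
  "2016-01-01T10:00:00Z gateway req=42 accepted POST /orders",
  "2016-01-01T10:00:01Z billing req=42 charged card",
  "2016-01-01T10:00:02Z gateway req=42 responded 201",
]
digest(logs).each { |e| p e }
```

Three raw lines from two services collapse into a single event; that reduction is the digestibility part, on top of the mere collection.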

So I think observability is not enough. Digestibility is what makes observability worth it. Maybe such tools already exist? Probably in the container world there is something like that. Is there not?


The virtues of duplication

A few weeks ago I began to prepare a copy of the Green Ruby template system for the Remote Meetup team. It's kind of ironic because, from some point of view, this code is a sin: it was not written with genericity in mind. It deliberately ignores coding best practices; it's joyfully messy and blatantly suboptimal. It was a quick and dirty scripting solution. It could have been a set of shell scripts; it happens to use Ruby. Check it out if you don't believe me.

But it has been doing the job for years now. It's builder code, run as a convenience only a few times a week, so it doesn't really need to be fast. It just needs to do the job. Trust me, I like good code, with clean design and full test coverage. But this one was just an intimate assistant of mine, not really software. Just some automation scripts.

And now here I am, facing a situation where some friends need the same setup and I can't just hand them the code, it's so custom. In the end there were only a few changes to make and it was ready. But the interesting part is the process: while duplicating the code for the Remote Meetup newsletter, I extracted some stuff and made a config file to remove various hardcoded things.
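The extraction move looks something like this; the keys and URL are made up for illustration, not the actual Green Ruby template settings:

```ruby
# Before: values like the site URL were hardcoded in the builder
# scripts. After: they live in a YAML config, so the same code can
# serve another newsletter. All keys here are hypothetical.
require "yaml"

CONFIG = YAML.safe_load(<<~YML)
  site_name: Remote Meetup News
  base_url: https://example.org
  issues_dir: issues
YML

# The builder now reads from CONFIG instead of embedding the values.
def issue_url(number)
  "#{CONFIG["base_url"]}/#{CONFIG["issues_dir"]}/#{number}.html"
end

puts issue_url(1)
```

Nothing clever, but that is exactly the kind of small change that duplication forces on you.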

It is still a big ball of dirty code, but through duplication it got more generic. I love that feeling, which brings the software development world closer to the biological world. There is some kind of evolutionary process going on in the life of a piece of software. It takes many forms, and I like being reminded of those similarities. I could go on and on about how an open source ecosystem is necessary for the diversity of code to flourish and make evolution possible in a totally Darwinian way.

So this simple operation illustrates one principle: when you share your code, you shape it and make it more generic in the process. This can have various beneficial side effects beyond the single act of duplication and adaptation. I find it's also true when you publish your code as an open source project: if it gets some traction and people start to use it, they will import their context into your initial ecosystem and bring the same kind of adjustments, making it stronger in some way.

Anyways, the Remote Meetup News website and newsletter generator is now ready, and you may find the design kind of familiar. Well, the rule of the path of least resistance also applies here, for sure. I am beginning to apply back to Green Ruby the changes I made over there. I suspect the third duplication, if any, will be the extraction of the common parts into a separate codebase, like a gem with a lib.



That link to Shoutcloud made me laugh, and then made me think. It's not the first time I have seen a micro-service publicly available. Two years ago there was some talk about nano-services as an antipattern. But when you push the logic a little further, at a very large scale, maybe it's a projection of what the future will be.

Imagine our software totally destructured, calling functions that are stored on the net, using some load-balanced worldwide environment. We already do that with CDNs. The upcoming JavaScript module proposals go in that direction as well. What is a method call in a program as we know it today could become a service call to an external, globally available function.

After all, we always write the same code. How many times did you write a regexp for email pattern validation? RFC 822 and RFC 5322 are nasty, yeah. If we had no latency concerns, I would gladly delegate various pieces of code to a specialized service. But is latency really an issue now? We work more and more with async code, with queues and messages. What seems heretical by our current legacy standards would not seem that foolish in a slightly different context.
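As a thought experiment in Ruby, here is what delegating that validation could look like. The endpoint is imaginary, and the local regexp is a crude stand-in, nowhere near the full RFC 5322 grammar:

```ruby
# A "nanoservice" call in place of an inline regexp. With no endpoint
# given, we fall back to the local check this call would replace.
require "net/http"
require "uri"

LOCAL_EMAIL = /\A[^@\s]+@[^@\s]+\.[^@\s]+\z/ # crude local heuristic

def valid_email?(address, endpoint: nil)
  if endpoint
    # Hypothetical remote validator: 2xx means the address is valid.
    uri = URI("#{endpoint}?email=#{URI.encode_www_form_component(address)}")
    Net::HTTP.get_response(uri).is_a?(Net::HTTPSuccess)
  else
    !!(address =~ LOCAL_EMAIL)
  end
end

p valid_email?("dev@example.com") # local check; pass endpoint: to delegate
```

The interesting part is that the caller's code does not change shape at all; only the latency and failure modes do, which is exactly where queues and async messaging come in.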

So technically, I suppose nanoservices are a possible future. I even think they are a requirement for scaling any kind of agent-based architecture. Machine learning will be much better off registering maps to knowledge rather than knowledge itself. But I wonder about the economic side of things. The old capitalist market economy is already stretching its reach far beyond its original scope with the immaterial economy. A totally destructured immaterial one will certainly pose an interesting challenge.


About tests and documentation

This aspect of development, documentation, is the source of various frustrations. It's hard to get done, but why? My feeling is that it's like testing. When you begin your craft as a coder, all that matters is the code. It's only after some iterations that non-code aspects come back to bite you in the neck. Like: uh-oh, now that I need to refactor, I really should have something that tells me whether everything still works. Tests become an early necessity for anyone who has learned that kind of truth. If you wait until the end of a coding cycle to write them, the task is huge and it cuts you off from your productivity cycle. Writing them early, along the flow, is way easier.

So I think documentation follows the same pattern. So many software projects are badly documented because this aspect is postponed until it's needed, meaning at release stage. In the early stages, you work on a prototype and don't need to explain how things work or are supposed to work. And when it's released, there is usually some other task waiting, and it's hard to stop everything to go back and document things properly. It may be a flaw in the agile process, but it may also be a feature: if you don't document along the way, you won't document much.

Personally, I try to treat documentation as one of the first tasks for any source code I write. Various tricks can help with that, like readme-driven development, including the doc inside the code with yarddoc or apipie, or coupling documentation with tests with rspec or RSpec API Doc Generator. But honestly I prefer edited documentation, which can follow a structure designed as documentation, rather than merely an automated output of some code.
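For instance, documenting along the flow with YARD tags looks like this; the method itself is a made-up example:

```ruby
# Documentation written along the flow, YARD style: the doc lives next
# to the code it describes and can be rendered later with `yard doc`.

# Builds the output filename for a newsletter issue.
#
# @param number [Integer] the issue number
# @param ext [String] the output extension
# @return [String] the generated filename
def issue_filename(number, ext: "html")
  format("issue-%03d.%s", number, ext)
end

puts issue_filename(7) # => "issue-007.html"
```

The cost of writing the tags while the method is fresh in your head is tiny compared to reconstructing the intent months later.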

There are various tools for organizing edited documentation, like Read the Docs, which is in Python. Another project appeared last week from the folks at Pluralsight, named hack.guide(). It's a community project, but since it is totally open source, I was thinking the documentation-building CMS they made could pretty much fill the gap as a lightweight RTD, with the added benefit of a UI for editors. Too bad we don't have anything like this in Ruby (or do we?).


The future of under-engineering

Recently Marcelo told me: it's weird how in this industry we do 10% research and specification, 40% implementation, and 50% debugging. I'm used to 70% research and specification, 20% implementation, and 10% debugging. He worked in the hardware industry for a while and had just joined a service-based company. This is actually a very interesting remark, and it reminded me of my twenties, when I worked as a construction worker.

When I was young there was no internet, and I had a ten-year break from computers. I had to take odd jobs like working on construction sites for low wages. After that I went to art school, and later on I built sets for business shows. I was shocked by the gap between those two worlds. When building a house, so much time is spent drawing plans and thinking things through in advance. In show-business set construction, it was mostly about improvisation and managing inflexible time constraints, with one-time-use construction.

I feel there is the same gap in the software industry. It's not exactly the same, for sure, but the paradigm feels alike. In service software production, SaaS or ISP businesses, we tend to under-engineer. There are perfectly legitimate reasons for that: the life cycle of a service platform is quick and volatile, and the value is not in the software asset but in the customer-user experience.

The agile organization model reinforces this pattern by providing a substitute for early specifications, in the form of user-experience descriptions. All this is fine and good, for a time. But as the years pass we see so many occurrences of 'temporary' projects becoming indestructible legacy monsters. It's as if there were some tipping point where development should shift from under-engineered to well-engineered, but it's rarely anticipated properly.

But this kind of problem is pretty hard to address. Throwing away the early instances is very costly, especially when the organization is shaped by a fast-paced, reactive production model. Introducing proper engineering at an early stage is not a clever option either, as the product has to adapt to the service, which depends on a constant feedback loop with the users.

I have the feeling something is missing: an evolution of agile that could include the seeds of later engineering. Some way to make it possible to start fast, then evolve into a solid, slower model later on without crisis or disruption. This is the perspective I think was missing in the article I cited in Green Ruby 145. But I don't know the answer to that problem. I suspect it will emerge by itself in the next few years.


That micro-service thing

For a while now, and even more since the rise of Docker, it has become a trend to split applications into parts and approach them as a collection of micro-services. This is not exactly new; I remember seeing various applications based on this concept back in 2002. But they had shortcomings: development was harder, and it brought a whole lot of added complexity because there were many moving parts.

In a project I have the occasion to follow, I can watch the migration from monolith to micro-services, and I can tell you the architecture change is not a simple technical decision. By splitting the application, a whole lot of application concerns move out of the development team's area and become the responsibility of the infrastructure team. The shift cannot be taken lightly.

From what I observed, the switch to micro-services can only be efficient if there has already been a shift to a real devops organization, meaning that development and infrastructure are more tightly coupled. Otherwise, it's just a mess. QA can also go crazy, and the networking layer gains complexity (or even dramatic latencies). Errors and service resilience also need an extra layer of attention.

Don't move to micro-services if you are not ready for it. Seriously, it can end up as shooting yourself in the foot.


The dimensions of coding

Today, while wandering around on my weekly hunt for good links, my eye was caught by a post named Coding is three dimensional. It's quite an interesting way to look at it. But the reason it struck me is that it was missing the fourth dimension, the one that makes all the difference once you have years of coding behind you: time is a parameter.

Code doesn't exist outside of time. It has a past and the prospect of a future, and both shape its current morphology. There are a lot of efforts to produce code analysis, but the real analyst is a historian and needs a systemic approach that includes time as a factor. We are still far from being able to automate that. In some ways, that's good news: we won't be replaced by small scripts very soon.

The time factor is actually the essential element in the technical debt formula. Purist coders can't cope with technical debt, but if you have two ounces of businessman in you, it makes total sense. The tradeoff of technical quality for fast deliverability only makes sense because timing is critical. A mess is not technical debt.

If coding were disconnected from the market, if it were not a business and more like an art, maybe time would not be that critical. But even open source software depends on the market at one point or another. I fail to see how it could be otherwise.

Honestly, I would prefer clean coding and no market tradeoff, but that’s just a dream.


Javascript and thoughts on programming

Recently I've been playing with Hubot plugin code in CoffeeScript for our company. It's been a while since I did much JS, but I used it for a long time and didn't find it too difficult to catch up. For some reason, though, it brought back the same feeling I get every time I return to that language: I feel dirty. I can still do what I need to do, for sure, but I don't feel like a builder, more like an acrobat. And I'm far from a purist.

Some people talk about JavaScript taking over the world. But that language, imho, was just in the right place at the right time. By having a runtime embedded in browsers and browser libraries, it hijacked the most used software on our computers and mobile devices, transforming them into richer clients. Along the way, various layers were added to meet the needs of software design, because JavaScript's initial goal was merely DOM manipulation.

The thing that always struck me most about JavaScript is that, despite the ECMAScript effort, it feels like there is no single authoritative standard or documentation. It is pretty extensively documented, of course, but because the language is pushed forward by the implementations rather than by the standards body, it gets a bit messy.

I saw a drawing this week that illustrates the mess quite well. It feels like JavaScript is waiting for something to come and replace it.

But there is hope with ES6. It seems that in recent years the normative effort on the ECMAScript standard has gained traction and press coverage. But I personally don't think it will bring the solution. I enjoyed reading Eric Elliott's thoughts on the topic, though I'm not sure he's right about everything. What he is right about is that there will be an after-JavaScript.

Unless the after-JavaScript doesn't arrive fast enough and is made irrelevant by some new programming paradigm that may appear one day soon, I mean, within the next 10 years. At some point, just as big data became too complex for humans to handle, programming will also get too complex and will be handled by algorithms. We already see it coming. All programmers will then become high-end workflow designers, or just play with antiquities.

In this perspective, I think JavaScript is a great intermediary technology for the time being, given how pervasive it is. It's far from satisfying, but it does the job. The younger generation, though, should keep an eye on higher-level abstract approaches, like systems architectures, workflow logic, and organization patterns, because imho those will drive software design in the next 10 to 20 years.


The yin and yang of software development

The topic I talked about last week led me to think about it more widely. I ended up with the thought that many problems in software companies are clearly a problem of balance between their yin and their yang.

This old Chinese principle is documented in a very old-fashioned way, opposing genders and principles. But it boils down to the fact that many dynamics must rest on a balance between two opposing principles; otherwise they fail.

The way I see it, software developers are a nurturing kind. This profile has to consider the long term; it decides actions for later outcomes. It's about giving life and growing it. That feels closer to the Yin principle.

On the other hand, business people are bound to a shorter time frame. And I'm not talking about entrepreneurs and the rare visionaries, but the regular business workforce. They are competitive, aggressive, fighters. That really feels like the Yang concept to me.

In all the cases where I saw software companies failing, I think it was because of a lack of balance between those two principles. Either the management was too soft and not aggressive enough towards its market, or it was too aggressive and nurturing did not weigh enough in the equation.

I don't think this balance requirement applies to everything, to be honest. But in the constituted body of a software organization, considering the current (questionable) market economy, it feels like the Yin and the Yang have to be in balance to give the organization a chance of survival.

One may have the feeling that a dominant Yang (the business side) is the more common case. But they are just noisier. Many projects stay silently in the dark simply because there was no real business consideration (or even a refusal of it).

Grady Booch's keynote (linked in the video section) confirmed this opinion for me in various ways. Engineers have a duty to fight for balance when they can. They also have to understand that it's not a one-way deal. If you want to practice programming in a nurturing-only context, win the lottery and dedicate your time to writing free software (where market requirements don't apply). But in the usual case, you should consider whether you are in a balanced context, and if not, try to work on balancing it.