Chatops and machine learning

Since I am slowly becoming the chatops specialist where I work, I find myself wanting more. Since the beginning, we have been interacting with programs essentially through graphical interfaces. First asynchronous, if we think about the web, now more and more synchronous, but they are interfaces that are not human. They are designed to give control to the operators of those interfaces.

But the more autonomous our programs become, the more we should trust them to sort information by priority. Interfaces like Siri or Echo are much more ‘human’ and conversational. It doesn’t take a genius to speculate that graphical interfaces are going to die one day, except for very specialized usages, and that interaction will become mostly conversational.

In the course of my development of interactive agents for technical needs, I noticed that adding just a little bit of intelligence and memory to those agents goes a long way for usability. Especially in chatops, a lot of the actions required from those agents are predictable and repetitive. The development of new features should follow the recognition of those patterns and shorten the path to accomplishing some actions. That’s pretty much my job.

But coding this continually is not very cost-effective. Tools change, and patterns evolve with them. Now all I can think about is a way to design an IRC bot that learns by itself: a program that does real meta-programming and treats its commands as data rather than as hard-coded, pre-conceived paths for the information to flow.
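To make the idea concrete, here is a minimal sketch in ruby of what treating commands as data could look like. The `Bot` class and its `learn` meta-command are entirely hypothetical, not an existing tool:

```ruby
# Hypothetical sketch: an agent whose commands are stored as data and
# can be extended at runtime, instead of being hard-coded methods.
class Bot
  def initialize
    @commands = {}
    # A meta-command: teach the bot a new command from chat input.
    # (Naive and unsafe: eval trusts whoever talks to the bot.)
    @commands["learn"] = lambda do |args|
      name, code = args.split(" ", 2)
      @commands[name] = lambda { |_| eval(code).to_s }
      "learned #{name}"
    end
  end

  # Dispatch a chat line by looking the command up in the data structure.
  def handle(line)
    name, args = line.split(" ", 2)
    cmd = @commands[name]
    cmd ? cmd.call(args.to_s) : "unknown command: #{name}"
  end
end
```

Hooked up to an IRC library, such a bot could grow new commands from the conversation itself; a real version would obviously need sandboxing instead of a raw `eval`, and would persist what it has learned.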

If you know some tools that already do that, can you fire me an email?


Innovation and Consensus

Last week I got to check out two javascript frontend frameworks: Choo (the cute framework) and Cycle.js (the power of streams). It feels like React is eating the frontend world nowadays, but there are actually a lot of non-marginal alternatives. It made me think that there is something tricky about the process of innovation. It is born from disagreement. It feeds on discontent.

Those frameworks are born from the refusal to consider React a consensus, and that refusal generates alternative paths exploring different paradigms. In my opinion, and from a global point of view, it’s a sign of the vivacity of an ecosystem. Diversity of species guarantees a larger span of choice for natural selection and a faster evolution.

But consensus and normative approaches have so many virtues. Damn, this is tricky. Well, here is the catch. Uniformity in the software ecosystem creates stability, which in turn creates comfort for the actors involved in the industry. Habits get stronger, mobility is easier, interoperability leads to bigger systems with long life-cycles. This is good.

On the other hand, diversity creates instability, challenges developers and makes it harder to find common ground. It favors exploration and smaller systems with shorter life-cycles. In a diverse ecosystem, paradigms and new ideas are born and die quickly. And I think that makes for a much richer ecosystem.

For some reason this duality reminds me of the cathedral and the bazaar. It’s totally unrelated, actually, but maybe there is some kind of interesting parallel in this metaphor collision. Beyond that, there is a real antagonism for developers, and an opposing interest between the development workforce and software as a living species.

And I have the feeling that in the long run, diversity and innovation win. So you had better get used to the discomfort they bring. Train yourself to jump from one framework to another. Don’t let yourself get lazy by systematically going with the comfortable consensus. This ability to cope with change is your best hope for staying current. Because change is the way of evolution.


Nothing to report

Yeah, there are weeks like that: I don’t have inspiration. Or I’m lazy. Or I’m too busy writing javascript tests with mocha, chai and sinon. I already have three times more lines of code in tests than in code, and even if coverage says 80%, I know for a fact that there are many more cases I need to test. Anyway, I’m having too much fun to do more than the traditional gathering of the links for this week. Can’t write a rant. No way.


The (in)culture of encryption

A couple of weeks ago I found out that a friend was keeping his passwords in a Google Sheets document. I was horrified. But he’s a normal person. I mean, no more technical than the next guy, or just a little. He’s using the web interface of Gmail for his mail, like many people do (I even know very technical people doing it, which still boggles me). I looked around and found Mailvelope. So I hooked him up with it and now he can use GPG.

In the past 20 years I have seen the timid evolution of personal encryption. Oh, there are initiatives like Keybase, and various simple tools like password-store or Felony, which I discovered this week. But it seems that encryption doesn’t really stick to everyday usage, unless you give it specific thought. Fortunately there is a welcome generalization of SSL for inter-server communication, with initiatives like Let’s Encrypt. But inter-personal communication is still wide open.

It is more and more well-known that our data is food for various corporations, governmental agencies, and dark organizations. What will it take to get users to demand better privacy? Will it ever happen?

I mean, yes, for sure people can use the tools. But it’s cumbersome. Until encryption is embedded in our tools and services, it simply won’t spread significantly enough. There are some projects, like Caliopen, that try to do so. On the other hand, we have seen services like Telegram provide it, and even some mainstream providers like WhatsApp jump onto the full-encryption train. So maybe there is hope? I still wonder what part Facebook (which now owns WhatsApp) played in that move.

The recent fight between Apple and the US government was supposed to set some kind of precedent. Too bad it was aborted. But they would have complied in the end, that’s my bet. Now that encryption is the only way for companies to legally keep their users safe from legal (and illegal) inquiries, maybe more will consider it?

If you have two ounces of technical savviness, please stop running naked in the streets. Gear up and use encryption whenever possible.


Open code, a chance for improvement

Ever since I started writing code, I have tried to publish as much as I can as open source components. But I have had occasion to work in situations where it was not possible. And I noticed some serious differences in the results.

When you publish some code on, say, GitHub, you can just throw it out as-is and be done with it. Then you merely use GitHub as a repository provider and don’t care much about anything else. But when you begin to spend some time on it, you notice that external contributors can bring great fixes, help detect bugs, and generally speaking make your code more valuable in itself.

But this is a two-way road. To invite people to collaborate, you need to address a certain number of little details. Writing a decently clear README is a demonstration of politeness toward any passing guest. It’s just more inviting. Making sure you have a complete enough test suite guarantees that external contributions won’t mess up existing code (if writing tests in itself was not motivating enough). Refactoring your code by following Code Climate’s advice will break huge methods into small pieces, making things easier to improve. Enforcing some kind of style guide will keep people from getting confused by non-standard code-art (that person could be you in one year).

When you work at a company as the only coder on one piece of code, you don’t have much incentive to enforce any of those aspects. And I know about it because I have seen a lot of legacy code that was written that way. With lame tests whose only purpose was to satisfy code coverage without really testing much, weird code style, epic methods, no instructions. If it’s just you and a couple of friends that you see every day, it’s fine, you can deal with it. For a time.

The fact is that exposing your code brings an incentive to work on the (apparently) non-essential aspects of it. But those aspects bring a huge improvement in the long term. Which leads me to consider that opening your source code is a path to making it better.

And usually, I have noticed that bosses don’t care whether it’s open or not, as long as no trade secrets are revealed. And we write so much code that is business-neutral anyway. At the end of the day, it’s only a matter of asking the boss if you can free this or that code, and then it’s on its way. If the code is published under an organization on GitHub, there is even more incentive to keep it clean, and it will also help potential candidates understand what kind of stack you are dealing with and what kind of principles you try to enforce. Even if those are actually only enforced in your open source code and the hidden code is messy. Haha.

So, I ask you now: what in your current codebase could you extract as an open source gem? Or node package?


Remote working

For some random strange reason I got a lot of links this week about remote working websites. I know it’s a very real topic for our craft. But I still see a lot of companies that have a hard time coping with the concept.

On one hand, there is a shortage of technically skilled staff. The growth of the tech industry, and especially of online services, is way too fast for the education system to catch up. It’s been like that for pretty much 20 years now. And there is an unbalanced distribution between where the growing companies are and where the growing population of techies is. So it would only make sense that either relocation would become much easier or remote working much more widespread, no? Well, no. Relocation is bound by laws that are not driven by the tech sector alone. And remote working depends on a cultural, and also legal, shift.

Many of us know what working with remote staff means. First, it involves an unprecedented level of trust in the staff. The same level of trust you need to invest when working with a contractor, actually. Then the old-fashioned command-and-control approach of traditional management becomes totally inefficient.

Even more, being efficient with a remote team means adopting a work organization and tools that are specifically designed for this setup. From what I have experienced, it’s either one or the other. When half the staff works locally in an office, it’s not easy (yet not impossible) to be efficient with the other half remote.

Some things will likely keep happening locally, the old way, with oral communication and meetings with sparse note-taking. That will leave the remote staff in the dark about some parts of the internal process. It may create a two-speed kind of organization: there are the ones at full speed and the ones that are just, well, not in the core. Oh, it’s still possible to make it work. It’s just harder.

Or maybe it’s like the web and mobile. Slowly, companies need to become remote-first and then eventually also work at a local, office-based scale. But there are still various barriers to a truly remote-friendly world anyway. The legal status is not clear: for example, money transfers between countries, compliance with social coverage conventions; no international law seems to cover international employment of individuals.

From what I can understand, remote working from a distant country is still a hack, legally speaking. Companies need to find tricks. Luckily, hacking is something we are not bad at. But still, I wonder when this is going to change, so we can see a legal ground for a truly normal remote working context.


The virtues of duplication

A few weeks ago I began to prepare a copy of the Green Ruby template system for use by the Remote Meetup team. It’s kind of ironic because, from some point of view, this code is a sin and was not written with genericity in mind. It’s deliberately not constrained by coding best practices; it’s joyfully messy and blatantly suboptimal. It was a quick-and-dirty scripting solution; it could have been a set of shell scripts, but it happens to use ruby. Check it out if you don’t believe me.

But it has been doing the job for years now. It’s builder code, run as a convenience only a few times a week, so it doesn’t really need to be fast. It just needs to do the job. Trust me, I like good code, with clean design and full test coverage. But this one was just an intimate assistant of mine, not really software. Just some automation scripts.

And now here it is: I face a situation where some friends need the same setup, and I can’t just give them the code, it’s so custom. Still, there were only a few changes to make and it was ready. The interesting part is in the process. While duplicating the code for the Remote Meetup newsletter, I extracted some stuff and made a config file to remove various hardcoded things.
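As an illustration of that extraction step, this is the kind of change involved, sketched in ruby. The file name and keys are invented for the example; this is not the actual Green Ruby code:

```ruby
require "yaml"

# Invented example of pulling hardcoded values out into a config file,
# so a duplicated copy of the builder only needs a different config.yml
# instead of a patched copy of the code.
DEFAULTS = { "title" => "Green Ruby", "issues_dir" => "issues" }

def load_config(path = "config.yml")
  # Fall back to the historical hardcoded values when no config exists.
  return DEFAULTS unless File.exist?(path)
  DEFAULTS.merge(YAML.load_file(path))
end
```

The duplicated newsletter then just ships its own `config.yml`, and both copies can keep sharing the same builder logic.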

Well, it is still a big ball of dirty code, but in the duplication, it got more generic. I love that feeling, which brings the software development world closer to the biological world. There is some kind of evolutionary process going on in the life of a piece of software. It takes many forms, and I like it when I get reminded of those similarities. I could go on and on about how an open source ecosystem is necessary for the diversity of code to flourish and make evolution possible in a totally Darwinian way.

So this simple operation illustrates one principle: when you share your code, you shape it and make it more generic in the process. It can have various beneficial side effects beyond the single act of duplication and adaptation. I find it’s also true when you publish your code as an open source project. If it gets some traction and people start to use it, they will import their context into your initial ecosystem and bring the same kind of adjustments. Making it stronger, in some way.

Anyway, the Remote Meetup News website and newsletter generator is now ready, and you may find that the design is kind of familiar. Well, the rule of the path of least resistance applies here too, for sure. I have begun to apply back to Green Ruby the changes I made over there. I suspect the third duplication, if any, will be the extraction of the common parts into a separate codebase, like a gem with a lib.


Playing with crystal

Last week I went to a remote meetup of Paris.rb (fr). Well, it was at 1am in my timezone, but I wanted to check how remote meetups can go, and there were presentations about Crystal and Kemal. It was a great moment (the video is online if you understand French), and it gave me the push to give Crystal a try. It was low on my todo list, but it was there, waiting for the proper conditions.

That’s pretty much the main thing I got out of it. Attending social activities once again provided a great push to move forward. It’s not about what you learn (which can still be valuable), it’s not about networking with people (even if that can be priceless), it’s all about the personal alchemy that brings you to your edge and keeps you hungry for more. Well, that’s how it works for me, at least.

So I had a look at Crystal, and played a bit with Kemal, and I’m very happy with the result. I made some attempt to check out Elixir too, but Crystal felt much closer to Ruby. The main differences are the variable typing, the stdlib that includes some modern stuff like websockets or OAuth2, and the compile step (which in some cases can be a bit taxing). But the speed gain is phenomenal. I suspect it would make sense, in a scalability strategy, to think about porting Ruby code to Crystal when perfs become an issue (not sure how that would apply to a complex Rails app, though).

Okay, yeah, Crystal is still very young. But it’s getting traction, and I bet it has a bright future ahead.


Scarcity and abundance

As you may remember, this newsletter uses the generous free plan from Mailchimp. But it has limits: only 2000 emails can subscribe to the newsletter. It’s already a great gift, and I’m pretty sure it’s a good business calculation for them. Now Green Ruby has reached 1915 subscriptions. Which means we need to address that pervasive concept of scarcity.

I can remember when it shifted, with the first web pages in ’96. When tables were introduced in HTML, it became less obvious how to recreate something you saw, but the possibility to browse the source of a page gave you the exact recipe for how it was made. So you could copy from it. I think open source would not have had such large adoption without that idea, inherited from HTML, that you need access to the source to learn, reproduce and improve.

It also made even more obvious one of the key aspects of the internet age: by dematerializing market goods (ie. with the introduction of software), they became reproducible at a cost marginal enough to be forgotten (yeah, bandwidth and storage are not free, for sure). When you give it to someone, you still have it. Not like that glass of beer. It placed us in an awkward paradigm: the world of abundance.

There have been so many efforts to artificially bend the internet back into a world of scarcity. See, without scarcity, there is no economy as we know it. There is another kind of economy, but the big guys who lead the old one are not ready to let it go. For the old economy to work, things have to be scarce. Otherwise there is no competition to obtain commodities, no motivation to work like crazy to push forward the progress of production. In an abundance economy, also known as a gift economy, people are less likely to be controlled, and they don’t want to work hard; they want pleasure and satisfaction.

Honestly, when I see the efforts made to control a resource that is naturally abundant, using the tools of law, copyright, patents and all that kind of thing, I can’t help thinking about the abundance of material goods. Technological progress, after the war, promised some kind of abundant society. Work would be automated, so we would have less and less to do and could just enjoy the benefits of humanity’s global progress in taming the material world. Well, we are far from it, and I’m instinctively convinced that it’s by design. And it makes me sad. People still need jobs; society won’t provide for them. There is nothing like common goods in this humanity. And now humans compete with machines for jobs, whereas they should have been allies. Sad, really.

But this led me quite far from my initial topic: we have limited seats for this newsletter, so we’ll handle it in two ways. First, I will send a mail to all the people who never click on any links and ask them if they want to stay. Then, after a while, I will unsubscribe the ones that stay silent and inactive. That could skim off some 300 people, maybe, and buy us some 6 months at the current rate of newcomers each week.
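As a back-of-the-envelope check of those numbers (the ~15 signups a week is my own rough guess, implied by the figures above rather than stated anywhere):

```ruby
# Seats math for the Mailchimp free plan, using the numbers above.
limit       = 2000   # free plan ceiling
subscribers = 1915   # current count
skimmed     = 300    # inactive subscribers possibly removed

free_seats     = limit - subscribers + skimmed  # seats after the cleanup
weekly_signups = 15                             # assumed current rate
weeks_left     = free_seats / weekly_signups    # weeks until full again

puts free_seats   # 385
puts weeks_left   # 25 weeks, roughly 6 months
```

So the "6 months" estimate holds as long as the signup rate stays around that guessed value.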

Then the subscription will be closed, unless I set up another, self-hosted publication system. We don’t really need a full-blown solution like Mailchimp, to be totally honest. They put real effort into having mail servers compliant with various anti-spam techniques, and this is a great thing. But I bet I should be able to match it on a self-hosted server that would cost me less than $10 a month. It would have no limit, at least. Maybe at that point I will open some kind of donation program. Actually, I already pay for the hosting, but I use my servers for other purposes, so it’s only a small fraction. And honestly, I could even host the Green Ruby web pages on GitHub Pages for free. But then I would not have access to the logs for analytics and would need to set up some Piwik, because I can’t cope with the idea of using GA. Or I could just stop caring about traffic metrics. That’s tempting. But I got lost in my train of thought here.

Bottom line is: we won’t get trapped by scarcity. Muahahaha. But feel free to send me your feedback on this topic if you have any thoughts.


GreenRuby IRL and Remote meetup

Last week I posted a link about Remote Meetups but, as sometimes happens, it didn’t stop there. The basic principle was appealing to me. It’s true that we don’t all live in the Bay Area or in New York. Having high-quality speakers at meetups is hard when you live in a small city or a remote country. And this is exactly what that initiative tries to address.

So I jumped in and had a talk with Franze. The result is that the GreenRuby meetup we will have this Friday will also be remote. Some people will attend physically, and some will join virtually using the Bigmarker platform on the RemoteMeetup account.

Those events are usually based on a format with a presentation followed by interactions. My plan is to go nuts and try a fully social format, with no presentation at all, jumping directly to the interactions. It’s going to be highly experimental and may lead to failure, but it certainly won’t kill any kittens in the process, so I think we are safe.

The mix between a physical and a remote event is the challenging part. We may try using mobile phones to let local people become remote participants (but it seems to be iOS-only for now). I will have an iPad ready for easier floating access; that could be fun.

You are welcome to join, either online or IRL. I heard that Gandi, the physical host and my employer, is going to fill up the fridge with hundreds of various beverages and won’t mind us helping to reduce that quantity.