Do I want Windows 10?

I sat down in my office this morning and found a new icon in the system tray notification area of my Windows 7 Enterprise desktop. Right-clicking it showed four options, none of which was “Exit.” Left-clicking it brought up this window…

I’m not sure when Microsoft installed this program, but it must have been last week when the Tuesday update batch hit. None of the actions in the window appealed to me this morning. I don’t know what it means to “reserve” my free upgrade, and I am still not sold on Windows 10. Since I couldn’t get rid of the program (at least not without hunting down the process and killing it, after which it would in all likelihood return on the next restart) I used the notifications manager to hide it.

It’s not that I’m uninterested in Windows 10. On the contrary, I’ve been very interested in all of Microsoft’s recent decisions about their ecosystem, including the move to open source .NET and make the CLR portable across systems. And the truth is that I was a Microsoft platform developer for twenty years. It’s safe to say I have never before been two whole versions behind the current release of the operating system. Why, then, am I still running Windows 7? Should I upgrade to Windows 10?

I’m still running Windows 7 because Windows 8 sucked hard, in my opinion. I installed it. I used it. My wife has it on her laptop and I tried to help her a bit during the acclimation phase, and I thought it was horrible. I’m a software developer and the interface of Windows 8 did nothing other than make everything I already knew how to do cumbersome and difficult, with no compensating benefit that I could detect. I don’t have two 24″ monitors so I can cover them with big tiles.

I felt pretty much the same way about Windows Vista, but for different and more technical reasons. Windows Vista just wasn’t ready. Windows 8 just didn’t make any sense, ready or not. However, Windows Vista became Windows 7, easily the best version of the OS that I have used, and a pretty high bar that Windows 8 certainly did not manage to clear, at least in my view. Now that Windows 8 is becoming Windows 10, is it time to switch?

Even without thinking too deeply about it, I’m biased against upgrading. The main reason is the deprecation of Windows Media Center. Microsoft is apparently giving up on the desktop entertainment functions of the PC. But for me WMC has long been my Netflix solution of choice. It looks and works great, and my ten-year-old Firefly RF remote works awesomely with it. I realize, unfortunately, that this technology is aging, so maybe I should just get over it. What else does Windows 10 offer me, other than correcting the things people saw as mistakes in Windows 8? Let’s have a look at their news release. What are the big features a professional developer like me should care about?

Cortana, the world’s first truly personal digital assistant, helps you get things done. Cortana learns your preferences to provide relevant recommendations, fast access to information, and important reminders. Interaction is natural and easy via talking or typing. And the Cortana experience works not just on your PC, but can notify and help you on your smartphone too.

Awesome, a digital assistant. I don’t really need one on my desktop. It might be useful in a mobile context, but I don’t run Windows Phone and I am not likely to anytime soon. Then again my kids all have iPhones with Siri, and I don’t hear them talking to their phones. As far as I know they don’t use Siri for anything. Maybe if I went to San Jose I’d encounter lots of people asking their phones to do things, but I just don’t see it happening around here, at least not in public.

Microsoft Edge is an all-new browser designed to get things done online in new ways, with built-in commenting on the web – via typing or inking — sharing comments, and a reading view that makes reading web sites much faster and easier. With Cortana integrated, Microsoft Edge offers quick results and content based on your interests and preferences. Fast, streamlined and personal, you can focus on just the content that matters to you and actively engage with the web.

Well, web apps are still a big part of what I do so I will be getting familiar with Edge whether I want to or not. Am I excited about getting Windows 10 so Edge can replace my current browser? Let’s see… it has Cortana integrated… w00t. See above. And a new “reading view!” I’ll be keeping an eye on Edge in terms of standards compliance and performance, of course, and I will be testing web apps on it, but if those are the big draw features I’ll continue to bounce back and forth between Chrome and Firefox as one or the other alternately pisses me off.

Office on Windows: In addition to the Office 2016 full featured desktop suite, Windows 10 users will be able to experience new universal Windows applications for Word, Excel, and PowerPoint, all available separately. These offer a consistent, touch-first experience across a range of devices to increase your productivity …

I’m not even going to bother with that whole quote. I love Windows, but if there was ever a good reason to hate Windows it would have to be related to Office somehow. From a word processor so horribly complicated that no living human can enumerate more than 10% of the feature set to an email cum personal productivity tool that set a new standard for how long a legacy code base can continue to be crammed into ever more ill-fitting skin, there is literally nothing about Office that I like. I’ve been using Google Docs for years, and the words “touch-first experience” in the quote above certainly don’t give me any reason to rethink that choice. Yuck.

Xbox Live and the integrated Xbox App bring new game experiences to Windows 10. Xbox on Windows 10 brings the expansive Xbox Live gaming network to both Windows 10 PCs and tablets. Communicate with your friends on Windows 10 PCs and Xbox One – while playing any PC game.

Ok, that’s fine. I’m not a console gamer, and I don’t own an Xbox, but this is still pretty cool, and if I were a console gamer, or were willing to purchase an Xbox to replace Windows Media Center, this might be exciting.

New Photos, Videos, Music, Maps, People, Mail & Calendar apps have updated designs that look and feel familiar from app to app and device to device.  You can start something on one device and continue it on another since your content is stored on and synched through OneDrive.

Wow, now this is what I was waiting for. Not. There are better services available now for all this stuff. Definitely will be meaningful to the thousands of people on Windows Phone, though. Next.

Windows Continuum enables today’s best laptops and 2-in-1 devices to elegantly transform from one form factor to the other, enabling smooth transitions of your tablet into a PC, and back. And new Windows phones with Continuum can be connected to a monitor, mouse and physical keyboard to make your phone work like a PC.

I’m not writing this off. Device convergence has been talked about for a long time, and I certainly hope somebody is able to make it happen. The vision of being able to use one device across different inputs and display form factors is compelling. But the problem Microsoft has is they don’t have the dynamic mobile ecosystem to pull this off and make it relevant to lots of people. Jury is out, but give them credit for the attempt anyway.

Windows Hello greets you by name and with a smile, letting you log in without a password and providing instant, more secure access to your Windows 10 devices. With Windows Hello, biometric authentication is easy with your face, iris, or finger, providing instant recognition.

Same thing. Not writing it off, despite the ridiculous name. Also not relevant to me sitting here at my desk. So like the Continuum feature it is interesting, but provides no motivation to move me off 7 for my work system.

Windows Store, with easy install and uninstall of trusted applications, supported by the broadest range of global payment methods.

When lots of people are running Windows on their phones this will be a very compelling offering. Which is just a bit like saying that when I win the Powerball I will be living in a beachfront home in Santa Barbara. I understand Microsoft’s dilemma, and I don’t envy them. They have to be relevant to mobile, even though they are barely relevant to mobile. You can’t build the future of your business on desktop computing, even though most of the people who use your current product use it at a desktop, for work.

It’s a tough situation, but the reality for me is that reading through this feature list doesn’t make me wonder whether I should upgrade to Windows 10 on my desktop: it makes me wonder why I am still running Windows on my desktop at all. I also have an Ubuntu development box and switching would be pretty painless for me. The answer is: games and Steam, and to a lesser extent a huge file called outlook.pst. I play games like Battlefield 4 and Planetside 2, and I like a mouse and keyboard for that. And I have ten years of history in my outlook file. I never look at it, but for some reason I haven’t been able to just delete it. If I ever get to that point and also stop playing shooters (which I should do since I basically just get slaughtered by teenagers) that will probably be it for Windows.

So the answer to the original question I posed to myself in the title appears to still be “no.” I’ll be giving Windows 10 a look in a VM at some point, and at some other point, hopefully still well into the future, I am going to be faced with the fact that I just can’t continue running Windows 7. I suspect that what will happen then is my Ubuntu and Windows machines will switch roles. Instead of having Windows drive my two monitors and using NX to access Ubuntu in a window it will be the other way around, and I will be switching into Windows every now and then just to play a game, or to actually try and find something in that huge Outlook file.

Microsoft .NET coreclr on Ubuntu

I’ve said for years that .NET should be open sourced and cross-platform, and that development is finally taking place. Today Microsoft announced a preview of coreclr running on Ubuntu, and this evening I was able to build it on Ubuntu 14.04 running in a docker container.

HelloWorld.exe on Ubuntu 14.04 in a docker container

There were quite a few steps involved, and as this is a preview there is also quite a bit still missing. Notably, compilation of managed code on linux (using roslyn) is not available, so after building the coreclr on Ubuntu you have to pop into Windows to build the managed code and the corefx libraries, then copy a bunch of crap over to your linux system. You also still need mono for some callable wrappers, and nuget to grab a bunch of dependent packages. Still, all in all it feels fairly historic, and coming on the heels of Microsoft’s announcement of their new cross-platform code editor I’d say it’s been a good week for them and for those of us who are fans of their tools (whatever platform we find ourselves working on).

If you want to try it yourself the ubuntu:14.04 docker image is a good starting point. Note that you’ll want to install both wget and curl before following the instructions to get coreclr running.
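
If you want a rough idea of the shape of it, this is more or less where I started. Treat it as a sketch: the prerequisite packages and build steps come from the coreclr repo’s own instructions, which will almost certainly change while this is still a preview.

docker run -it ubuntu:14.04 /bin/bash

# inside the container: wget and curl, plus git and the build toolchain
# (cmake, clang, and friends) listed in the coreclr build instructions
apt-get update
apt-get install -y wget curl git cmake clang

# clone the repo and kick off the build
git clone https://github.com/dotnet/coreclr.git
cd coreclr
./build.sh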


IBM Research report on performance of Linux containers

At Knowledge In Practice we were pretty early adopters of Docker, and after more than six months of use nearly all of our production services are now deployed to Amazon’s EC2 as linux containers. While the lower overhead of containers was a draw, as a small team the main benefits for us have been ease of deployment and increased environmental stability due to the use of Docker build files to declaratively specify the content of each service’s run-time environment. Launching a new instance of a service is literally as easy as adding one line to the cloud-init script for the instance, then running “docker pull” to get the image we want, and “docker run” to get the container going. Those steps could easily be automated as well. It’s a workflow that’s hard to beat.
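
To make that concrete, the service-specific part of one of those cloud-init scripts amounts to something like the following; the registry, image, and port are placeholders rather than our real services:

# pull the image for the service and start the container
docker pull registry.example.com/kip/some-service:latest
docker run -d --name some-service -p 8080:8080 registry.example.com/kip/some-service:latest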

Late last month IBM Research released a paper (PDF) comparing the performance of linux containers vs. traditional types of hardware and software virtualization. Not surprisingly containers fare quite well, although the paper notes that both VMs and containers need to be fine-tuned for high I/O workloads. Section 2.3 of the paper provides an excellent quick overview of how containers are implemented in linux using kernel namespaces and cgroups, and in fact I found that part of the document more valuable than the performance comparisons. Well worth a scan, at least, if you have an interest in this technology.

Bunch of Yahoos

Having a great set of developer tools can help make your platform ubiquitous and loved. When Microsoft first launched its Developer Network it revolutionized the way programmers got access to their operating systems, tools, and documentation. They successfully migrated that set of resources to the web and it remains invaluable for Windows developers. If you’ve ever set up access to a Google API, or deployed a set of EC2 resources on Amazon’s AWS cloud infrastructure, you know how impactful a clean, functional web interface is.

By the same token, a clunky, dysfunctional interface can make a platform loathed and avoided. Take Yahoo’s Developer Network and their BOSS premium APIs, for example.

We’re working on a system that needs to geolocate placenames in blocks of free text. This isn’t a trivial problem. There’s been a lot of work done on it, and we’ve explored most of it. During that exploration we wanted to try Yahoo’s PlaceSpotter API. It’s a pay service, but if it works well the cost could be reasonable, and just because we have built our system on free and open-source components doesn’t mean we won’t pay for something if it improves our business.

With that in mind I set out to test it, just as I had previously set out to test Google’s Places API. In that experiment I simply created a Google application under my user name, grabbed the creds, and wrote a python wrapper in about five minutes to submit text queries and print out the results. That’s my idea of a test.
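
That test was about as complicated as the following, shown here with curl rather than the throwaway python script; the endpoint and parameters are from Google’s Places text search documentation, and the query string is just an example:

curl "https://maps.googleapis.com/maps/api/place/textsearch/json?query=coffee+in+Denver&key=$GOOGLE_API_KEY"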

In order to test Yahoo’s PlaceSpotter I needed access to the BOSS API. To get access to the BOSS API I needed to create a developer account. Ok, that’s not an issue. I will happily create a developer account. To create a developer account, it turns out, requires a bunch of personal info, including an active mobile number. Ok, I’ll do that too, albeit not quite as happily because all I want to do is figure out if this thing is worth exploring.

I should note that there is a free way to get to the same Geo data that BOSS uses, and the same functionality, through YQL queries. Maybe I was shooting myself in the foot right from the beginning, but I had no experience with YQL, I needed to move quickly and make some decisions, and I just wanted an API I could fling http requests at. Since the billing is per 1000 queries I had no problem paying for the first 1000 to test with. Not that big a deal.

After creating the account, during which I had to change the user name four times because of the cryptic message that it was “inappropriate” (no, I was not trying to use b1tch as a user name, or anything else objectionable), I finally ended up on a control panel-ish account dashboard. There I could retrieve my OAUTH key (ugh) and other important stuff, and activate access to the BOSS API.

I clicked the button to activate the API, and the panel changed to display another button labelled “BOSS Setup.” Next to that was a red rectangle stating that access to BOSS was not enabled because billing had not been set up. It wasn’t obvious to me that in order to set up billing you have to click “BOSS Setup.” I assumed billing would be at the account level. Well, there are billing options at the account level! They’re not the ones you want, and unless the verbiage triggers some warnings, as it did for me, you might just go ahead and set up your credit card there, only to find it didn’t help.

Not to be deterred, I googled a bit and found that, indeed, I had to click “BOSS Setup.” It would have been nice if they had mentioned that in the red-colored billing alert. So I clicked, entered my login again because, you know, I was using the account control panel and so obviously might be an impostor, and ultimately found the place to enter my payment information. Once that was done, submitted, and authorized I received a confirmation and invoice in my email. Now, I could finally toss a few queries at the API.

Except no. When I returned to the account dashboard the same red-colored billing alert appeared. No access. I am a patient man, some of the time. Maybe their systems are busy handshaking. I waited. Nope. I waited some more. Nope. I gave up and waited overnight, and checked again this morning. Nope. Ok, dammit, I’ll click the “BOSS Setup” button again. I do that and what they show me is the confirmation page for my order again, with an active submit button. But wait… I got an invoice? Was I charged? Will I be charged again? Should I resubmit, or email Yahoo, or call my bank?

Maybe I should just not use the API. Oh, and did I mention that they have a “BOSS Setup” tutorial? It’s a download-only PDF. And 2/3 of it is about setting up ads.

Adding swap to an EBS-backed Ubuntu EC2 instance

Another one of those recipes I need to capture for future use. We were running a bunch of memory-intensive processes on a medium (m2) Ubuntu instance on EC2 yesterday, and things were not going well. It looked like some processes were dying and being restarted. Poking around in the kernel messages we came across several events reading:

"Out of memory: Kill process ... blah blah"

Well, damn. There are less than 4GB of usable RAM on a medium, but it should be swapping, right? Wrong. We checked free and there was no swap file. That was my screw up, since I set up the instance and did not realize it had no swap. Turns out that what Amazon considers to be memory constrained instances (smalls and micros, for example) get some swap in the default config, but apparently mediums and larges do not. We decided to rebuild the instance as a large to get more RAM and also another processing unit, and add some swap at the same time.

I wasn’t able to figure out how to configure swap space in the instance details prior to launch, so I searched around and put together this recipe for adding it post-launch. This applies to Ubuntu EBS-backed instances. It should work generally for any debian-based distro, I think. If your instance is not EBS backed you can use many of the same techniques, but you’ll have to figure out the deltas because we don’t use any non-EBS instances. Anyway, to the details.

EBS-backed instances have their root volume on EBS, which is what EBS-backed means. But you don’t want to put swap space on EBS. EBS use incurs I/O charges, and although they are very low, they aren’t nothing. If you were to locate swap on EBS then there would at least be some chance of some process going rogue and causing a lot of swapping and associated costs. Not good.

Fortunately, EBS-backed instances still get so-called “ephemeral” or “instance storage” at launch. Unlike EBS this storage is physically attached to the system, and also unlike EBS it goes away when the instance is stopped (but not when it is rebooted). That’s why it is called “ephemeral.” Something else that is ephemeral is the data stored in a swap file, so it seems like instance storage and swap files are made for each other. Good news: every EBS-backed instance on EC2 still gets ephemeral storage by default. It consists of either a 32GB SSD, or two 320GB magnetic disks, depending on your choices. For instances that do not get swap allocated by default, this storage is mounted at /mnt after startup (either the SSD, or the first of the magnetic disks; if you want the second disk you need to mount it manually).

So before you start make sure your config matches what I’ve described above, i.e. you have an EBS-backed instance with ephemeral storage mounted at /mnt (which you can confirm with the lsblk command), and no swap space allocated (which you can confirm with the swapon -s command).
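
To be explicit, the pre-flight check amounts to:

lsblk        # the ephemeral volume should show up mounted at /mnt
swapon -s    # should list no swap devices if no swap is configured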

First thing to do is create a swapfile in /mnt. The one thing I am not going to do is opine on what size it should be, because there is a lot of info out there on the topic. In this example I made the swap 2GB.

# sudo dd if=/dev/zero of=/mnt/swapfile bs=1M count=2048

This command just writes two gigs worth of 0’s to the file ‘swapfile’ on /mnt. Next make sure the permissions on this new file are set appropriately.

# sudo chown root:root /mnt/swapfile
# sudo chmod 600 /mnt/swapfile

Next use mkswap to turn the file full of 0’s into an actual linux swapfile.

# sudo mkswap /mnt/swapfile
# sudo swapon /mnt/swapfile

The first command formats the file as swap space (I actually have no idea what it does at a low level, so ‘format’ might be a wildly incorrect term to use), and the second enables it as the system swap file. Now you should update fstab so that this swap space gets mounted and used when the system is rebooted. Use your preferred editor and add the following line to /etc/fstab:

/mnt/swapfile swap swap defaults 0 0

Lastly, turn on swapping.

# swapon -a

You can now use swapon -s to confirm that the swap space is in use, and the free or top commands to confirm the amount of swap space. One thing to note is that this swap space will not survive an instance stop/start. When the instance stops the ephemeral storage will be destroyed. On restart the changes made in fstab and elsewhere will still be there, because the system root is on EBS, but the actual file we created at /mnt/swapfile will be gone, and will need to be recreated.
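
One way to deal with that, sketched here rather than battle-tested, is to recreate the swapfile at boot, for example from /etc/rc.local, before swapping is enabled:

# recreate the 2GB swapfile on the ephemeral disk if it is missing
if [ ! -f /mnt/swapfile ]; then
    dd if=/dev/zero of=/mnt/swapfile bs=1M count=2048
    chown root:root /mnt/swapfile
    chmod 600 /mnt/swapfile
    mkswap /mnt/swapfile
fi
swapon -a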


Running haproxy at system startup on Ubuntu

Sometimes I just have to capture this sort of thing because I know damn well I am going to forget it long before the next time it’s needed. We decided to use haproxy as a round-robin matchmaker in front of a cluster of elasticsearch nodes. I installed it on Ubuntu 14.04, configured it, and tested it by running it from the command line. Everything worked. But when I rebooted the instance the proxy didn’t start. I checked the init file and run levels. Everything seemed as it should be… but no dice. To make it worse, executing commands like…

sudo service haproxy start

or

sudo service haproxy status

… resulted in nothing at all. No errors, no output, nada. I searched around a little bit and stumbled on the following line in /etc/default/haproxy…

ENABLED=0

Apparently this flag, when set to 0, prevents the init script from running the proxy. The second-hand info I found on the reasoning is that there is no rational default config for haproxy, so they want you to explicitly configure it and then set this flag to 1. So… fine. Makes sense, but I do wish the init script was a little more verbose about what was happening. In any case, once I did this…

ENABLED=1

… haproxy launched at start as desired.
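
For future reference, the whole fix can be done without opening an editor, assuming the stock Ubuntu location of the defaults file:

sudo sed -i 's/^ENABLED=0/ENABLED=1/' /etc/default/haproxy
sudo service haproxy start
sudo service haproxy status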

Elasticsearch discovery in EC2

Elasticsearch out of the box does a good job of locating nodes and building a cluster. Just assign everyone the same cluster name and ES does the rest. But running a cluster on Amazon’s EC2 presents some additional challenges specific to that environment. I recently set up a docker-ized Elasticsearch cluster on two EC2 medium instances, and thought I would put down in one place the main things that I ran into setting it up.

The most fundamental constraint in EC2 is that multicast discovery (unsurprisingly) will not work. EC2 doesn’t allow multicast traffic. This gives you two choices: use unicast discovery, or set up EC2 dynamic discovery. Unicast discovery requires that all nodes have the host IPs for the other nodes in the cluster. I don’t like that idea, so I chose to set up dynamic discovery using the EC2 APIs. Getting EC2 dynamic discovery to work requires changes in three places: the Elasticsearch installation, the configuration file at /etc/elasticsearch/elasticsearch.yml, and finally the instance and security configuration on Amazon. I’ll take these in order.

The best way to integrate EC2 into the Elasticsearch discovery workflow is to use the cloud-aws plugin module. You can install this at any time with the following command:

sudo /usr/share/elasticsearch/bin/plugin -install \
elasticsearch/elasticsearch-cloud-aws/2.0.0.RC1

This will pull the latest plugin from the download site and install it, which is basically just extracting files. Note that the version in the command is the latest one. You can check here to see if it is still current. And that’s all there is to that. Adding the cloud-aws plugin enables the discovery.ec2 settings in elasticsearch.yml, which is where we’ll head next.

The Elasticsearch config file is located in /etc/elasticsearch/elasticsearch.yml, and we’ll want to change/add a few things there. First, obviously, give everyone the same cluster name:

cluster.name: my_cluster

Another setting that makes sense in production, at least, is to require the cloud-aws plugin to be present on start:

plugin.mandatory: cloud-aws

The next two settings are required for the cloud-aws plugin to communicate on your behalf with the great AWS brain in the sky:

cloud.aws.access_key: NOTMYACCESSKEYATALL
cloud.aws.secret_key: ITw0UlDnTB3seCRetiFiPuTItHeR3

Not to digress too much into AWS access management, but if you’ve set things up the right way then the access key and secret key used above will be for an IAM sub-account that grants just the specific permissions needed. The next two settings deal with discovery directly:

discovery.type: ec2
discovery.zen.ping.multicast.enabled: false

The first one just sets the discovery type to use the EC2 plugin. Pretty self-explanatory. The second one disables the use of multicast discovery, on the principle of “it doesn’t work, so don’t try it.” The last two settings we’ll discuss can be seen as alternatives to one another, but are not mutually exclusive, which requires a bit of explanation.

Basically, we may need to filter the instances that get returned to the discovery process. When the cloud-aws plugin queries EC2 and gets a list of addresses back, it is going to assume they are all Elasticsearch nodes. During discovery it will try to contact them, and if some are not actually nodes it will just keep trying. This behavior makes sense with the multicast discovery process, because if you are not listening for multicast traffic then you don’t respond to it. But the EC2 discovery APIs will return all the instances in an availability zone, so we need some way to identify to Elasticsearch discovery which ones are really nodes.

One way to do this is by using a security group. You can create a security group and assign all the instances that you want in a particular cluster to that group, then make the following addition to the config:

discovery.ec2.groups: my_security_group

This setting tells the plugin to return only those instances that are members of the security group ‘my_security_group.’ Since you will need a security group anyway, as explained below, this is a convenient way to separate them from the crowd. But there can be cases where you don’t want to partition on the security group name. You might, for example, want to have one security group to control the access rules for a set of instances representing more than one cluster. In that case you can use tags:

discovery.ec2.tag.my_tag: my_tag_value

This setting tells the plugin to return only those instances that have the tag ‘my_tag’ with the value ‘my_tag_value.’ This is even more convenient, since it doesn’t involve mucking about with security groups, and setting the tag on an instance is easily done. Finally, as mentioned before these aren’t mutually exclusive. You can use the groups option to select a set of instances, and then partition them into clusters using tags.

And that’s it for the elasticsearch.yml settings, or at least the ones I had to change to make this work on EC2. There are a lot of other options if your specific case requires tweaking, and you can find an explanation of them here. The last thing I want to go into is the necessary steps to take in the Amazon EC2 console with respect to configuration. These fall into two areas: security groups and instance configuration. I don’t want to digress far into the specific AWS console steps, but I’ll outline in general what needs to happen.

Security groups first. You’re going to need all the nodes in your cluster to be able to talk to each other, and you’re going to want some outside clients to be able to talk to the nodes in the cluster as well. There are lots of different cases for connecting clients for querying purposes, so I’m going to ignore that topic and focus on communications between the nodes.

The nodes in a cluster need to use the Elasticsearch transport protocol on port 9300, so you’ll need to create a security group, or modify an existing one that applies to your instances, and create a rule allowing this traffic. For my money the easiest and most durable way to do this is to have a single security group covering just the nodes, and to add a rule allowing inbound TCP traffic on port 9300 from any instance in the group. If you are using the discovery.ec2.groups method discussed above, make sure to give your group the same name you used in the settings.
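
If you’d rather script it than click through the console, the AWS CLI version of that rule looks something like the following, using the same group name as the config example above (in a VPC you may need --group-id instead of --group-name):

aws ec2 authorize-security-group-ingress --group-name my_security_group \
    --protocol tcp --port 9300 --source-group my_security_group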

The last point is instance configuration, and for my purposes here it’s really just setting the tag or security group membership appropriately so that the instance gets included in the list returned from discovery. There are lots of other specifics regarding how to set up an instance to run Elasticsearch efficiently, but those are topics for another time (after I figure them out myself!).
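
Setting the tag is similarly quick, either in the console or with the CLI; the instance id below is obviously a placeholder:

aws ec2 create-tags --resources i-0123abcd --tags Key=my_tag,Value=my_tag_value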

The very last thing I want to mention is a tip for Docker users. If you’re running Elasticsearch inside a container on EC2 your discovery is going to fail. The first node that starts is going to elect itself as master, and if you query cluster health on that node it will succeed and tell you there is one node in the cluster, which is itself. The other nodes will hang when you try the same query and that is because they are busy failing discovery. If you look in /var/log/elasticsearch/cluster_name.log you’re going to see some exceptions that look like this:

[2014-03-18 18:20:04,425][INFO ][discovery.ec2            ]
[MacPherran] failed to send join request to master 
[[Amina][Pi6vJ470SYy4fEQhGXiwEA][38d8cffbdcd5][inet[/172.17.0.2:9300]]],
reason [org.elasticsearch.transport.RemoteTransportException:
[MacPherran][inet[/172.17.0.2:9300]][discovery/zen/join]; 
org.elasticsearch.ElasticsearchIllegalStateException: 
Node [[MacPherran][aHWxgYAhSpaWlPEXNhs7RA][4fb92271cce6][inet[/172.17.0.2:9300]]] 
not master for join request from 
[[MacPherran][aHWxgYAhSpaWlPEXNhs7RA][4fb92271cce6][inet[/172.17.0.2:9300]]]]

I cut out a lot of detail, but basically the reason this is happening is that, in this example, node MacPherran is talking to itself and doesn’t know it. The problem is caused because a running Docker container has a different IP address than the host instance it is running on. So when the node does discovery it first finds itself at the container IP, something like:

[MacPherran][inet[/172.17.0.2:9300]]

And then finds itself at the instance IP returned from EC2:

[MacPherran][inet[/10.147.39.21:9300]]

From there things do not go well for the discovery process. Fortunately this is easily fixed with another addition to elasticsearch.yml:

network.publish_host: 255.255.255.255

This setting tells Elasticsearch that this node should advertise itself as being at the specified host IP address. Set this to the host address of the instance and Elasticsearch will now tell everyone that is its address, which it is, assuming you are mapping the container ports over to the host ports. Now, of course, you have the problem of how to inject the host IP address into the container, but hopefully that is a simpler problem, and is left as an exercise for the reader.
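
One possible way to handle that exercise, offered only as a sketch: ask the EC2 metadata service for the host’s private IP when you start the container, and pass it in as an environment variable for your startup script to write into elasticsearch.yml (the variable and image names here are made up):

HOST_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
docker run -d -p 9200:9200 -p 9300:9300 -e PUBLISH_HOST=$HOST_IP my/elasticsearch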

Docker: run startup scripts then exit to a shell

As I’ve mentioned before, Docker is set up to run one command at container start-up, and when that command exits the container stops. There are all sorts of creative ways to get around that if, for example, you want to mimic init behavior by launching some daemons before opening a shell to work in the container. Here’s a little snippet from a Docker build file that illustrates one simple approach:

CMD bash -C '/path/to/start.sh';'bash'

In the ‘start.sh’ script (which I usually copy to /usr/local/bin during container build) you can put any start up commands you want. For example the one in the container I’m working on now just has:

service elasticsearch start
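
For completeness, the relevant lines of the build file end up looking something like this; the path is just the one I tend to use, and ADD could be swapped for COPY on newer Docker versions:

ADD start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh
CMD bash -C '/usr/local/bin/start.sh';'bash'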

The result when launching the container is that bash runs the start script, which spools up es, and then drops into another interactive shell; the container exits when that shell does:

mark:~$ sudo docker run -i -t mark/example
 * Starting Elasticsearch Server                                         [ OK ] 
root@835600d4d0b2:/home/root# curl http://localhost:9200
{
  "status" : 200,
  "name" : "Box IV",
  "version" : {
    "number" : "1.0.1",
    "build_hash" : "5c03844e1978e5cc924dab2a423dc63ce881c42b",
    "build_timestamp" : "2014-02-25T15:52:53Z",
    "build_snapshot" : false,
    "lucene_version" : "4.6"
  },
  "tagline" : "You Know, for Search"
}
root@835600d4d0b2:/home/root# exit
exit
mark:~$

Docker Builds and -no-cache

I was building out a search server container with Elasticsearch 1.0.1 today, and I ran into one of those irritating little problems that I could have solved a lot faster if I had just observed more carefully what was actually going on. One of the steps in the build is to clone some stuff from our git repo, including config files that get copied to various places. In the process of testing I added a new file and pushed it, then re-ran the build. Halfway through I got a stat error from a cp command that couldn’t find the file.

But, but, I had pushed it, and pulled the repo, so where the hell was it? Yesterday something similar had happened when building a logstash/redis container. One of the nice things about a Docker build is that it leaves the interim containers installed until the end of the build (or forever if you don’t use the -rm=true option). So you can start up the container from the last successful build step and look around inside it. In yesterday’s case it turned out I was pushing to one branch and cloning from another.

But that problem had been solved yesterday. Today’s problem was different, because I was definitely cloning the right branch. I took a closer look at the output from the Docker build, and where I expected to see…

Step 4 : RUN git clone blahblahblah.git
 ---> Running in 51c842191693

I instead saw…

Step 4 : RUN git clone blahblahblah.git
 ---> Using cache

Docker was assuming the effect of the RUN command was deterministic and was reusing the interim image from the last time I ran the build. Interestingly it did the same thing with a later wget command that downloaded an external package. I’m not sure how those commands could ever be considered deterministic, since they pull data from outside sources, but whatever. The important thing is you can add the -no-cache option to the build command to get Docker to ignore the cache.

sudo docker build -no-cache -rm=true - < DockerFile

Note that this applies to the whole build, so if you do have some other commands that are in fact deterministic they are not going to use the cache either. It would be nice to have an argument to the RUN command to do this on a per-step basis, but at least -no-cache will make sure all your RUN steps get evaluated every time you build.

Logstash ate my events

I’ve mentioned previously that I’m testing some infrastructure components for a new system at work. One of those components is Logstash, which along with redis and Elasticsearch will function as our distributed logging system. As a test setup I have python scripts generating log messages into a redis list, which is then popped by Logstash for indexing into Elasticsearch. The whole thing is running in a Docker container, the building of which I discussed in my last post. These processes only generate a couple of events per second, based on the other work they are doing. I’m running four of them for now, so I am getting six to eight events per second written to redis. This is very low volume, but sufficient for me to get a handle on how everything works.
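
For context, the logstash side of that pipeline is nothing more than the redis input popping the same list the scripts push to; the host and key below are placeholders for whatever your setup uses:

input {
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash-events"
  }
}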

The first time I ran the test and connected to Kibana to see the collected events stream I thought it was so cool that I just paged around with a silly smile on my face for fifteen minutes, without paying a lot of attention to the details. The next time I ran it I noticed that some events seemed to be missing. I know the pattern of events emitted by these scripts very well, and I wasn’t seeing them all. So I shut down, checked a few things, convinced myself that no logic in the code could cause that pattern, and then fired the test back up again. This time things looked a lot closer to normal, but I couldn’t quite be sure without having another definitive source to compare to.

Fortunately I had already built in code to enable/disable both file-based and redis-based logging, so I simply enabled file logging and rebuilt the container. I fired it up, ran a short test, and the number of lines in the log files exactly matched the number of events in the logstash index on ES. The events in Kibana looked complete, too. So ok, problem solved.

I was off on other things for a day or two, and when I came back and started running this test again I forgot to disable file logging. I realized it after a short time and killed the test. Before rebuilding the container I decided to check the counts again. They were off. There were something like 500 fewer events in the logstash index than in the files, out of a total of several thousand. I started the test again, intending to run a longer one and do a detailed comparison, and this time the earlier pattern I had seen, with obviously missing events, was back. I let this test run for a bit and then compared counts: ~6500 in the files vs. ~3200 in the index. Half my events had disappeared.

I was pretty sure the culprit wasn’t redis, which is pretty bulletproof, but to quickly rule it out I shut down logstash and ran the test. With nothing reading them the events piled up in redis, and at the end llen showed the counts to be exactly the same. Redis wasn’t mysteriously losing anything. I next checked the logstash log, but there was nothing in it of note. I connected directly to ES and queried the index. The counts all matched with what Kibana had shown me. There was no indication anywhere as to why half the events had evaporated.

I was poking around on the logstash site looking for clues when I noticed the verbosity settings in the command line. I quickly edited the init.d script that launches the daemon to set -vv, the highest logging verbosity level for logstash. I reran the test and again saw the same pattern of missing events. I let it run for a minute or so and then shut it down. The logstash.log file was now over 7 megs, compared to a few hundred k in the last test. I dove in and quickly noticed some Java exceptions getting logged. These were field parser errors, caused by an inner number format exception, and the field that caused the issue was named ‘data.’ I grep’d the log to see how many of these there were, and the number came back ~3200. Well, whaddaya know?

The data field is one that I use in my log event format to contain variable data. In one message it may have a number, and in another a string. Looking at the exceptions in the log made it very clear what was going on: ES was dynamically assigning a type to the field based on the data it received. Not all my log events include data, and depending on the results of processing, the first populated ‘data’ field to get into the queue might have a number, or it might have a string. If a string field arrived first all was well, because ES could easily convert a number to a string. But if a number field arrived first, ES decided that all subsequent data had to be converted to a number, which was not possible in about half the cases.

The solution turned out to be pretty simple. You can include a custom template for your index in the logstash configuration, and this can include field mappings. In order to make this work through logstash you have to be using ES version 0.90.5 or later. Fortunately I am using the embedded instance for this test, which is at 0.90.9. To implement the template solution required modifying the logstash output to reference the template:

output {
  elasticsearch {
    embedded => true
    template => "/opt/logstash/es-event-template.json"
  }
}

It also required the template itself. At first I tried creating a very simple template that only added the single field mapping I needed. That didn’t work, and I assumed it was because I was overwriting logstash’s settings in a way that was breaking things. I had hoped my single setting would just get merged in, but either that’s not happening or there is something else I don’t understand. What I ended up doing was grabbing the current template from:

http://myhost.com:9200/_template

And then editing it to add the part I needed:

"data": { 
    "index": "not_analyzed",
    "type": "string"
}

And with that the problem was solved. The lesson of the day for me is: if logstash appears to be eating your events, crank up the logging verbosity level and see what’s giving it indigestion.