Monday, July 21, 2014

Adventures in Running a Personal Minecraft Server for Linux Admins

    So, I've been playing Minecraft for several years now. My wife, however, just started playing recently. Naturally, we wanted to play together. Sometime back in the history of Minecraft, the multiplayer server code was merged in with the original single player game code, and thus even local single player games run internally with a client/server architecture. Because of this, Minecraft has a nifty feature that lets you take a single player game you are playing and "Open to LAN", which makes it multiplayer.

    My laptop was powerful enough to run the game with minimal lag for either of us, but hosting it there meant that she could not play if she had downtime during the day while I (and the laptop) were at work. We tried hosting the game off her laptop instead, but it could not handle the load and caused lots of lag and other problems for both of us, enough to make a survival game unplayable. The obvious solution to this was to run a server.

    Now, Mojang recently started their own hosting for Minecraft, known as Minecraft Realms. This is a nice self-contained option that handles all the administration and operation for you. There were just two problems. 1) I'm a Linux Engineer. If word gets out that I'm letting someone else run a server for me, I'll never hear the end of it at work. 2) More seriously, there is a monthly fee for using Minecraft Realms. It isn't much, clearly a lot less than purchasing hosting to run your own server. However, I don't have to purchase hosting. I get an allowance of cloud servers through my work (Rackspace, disclaimer at the bottom of the page, all views are my own and not theirs.) and I'm not currently using it all. So, I can have a decent quality server to use for free. Now, I just need to figure out how.
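
    For the curious, the bare-bones version of "how" on a generic Linux box looks roughly like this. This is a minimal sketch, assuming the vanilla server jar from Mojang and a recent Java runtime; the directory, filenames, and memory sizes are just placeholders, not my actual setup.

mkdir -p ~/minecraft && cd ~/minecraft
# download the current vanilla server.jar from minecraft.net into this directory
# the first run writes eula.txt and the default config files, then exits
java -Xmx1024M -Xms1024M -jar server.jar nogui
# accept the EULA, then start the server for real
sed -i 's/eula=false/eula=true/' eula.txt
java -Xmx1024M -Xms1024M -jar server.jar nogui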

Thursday, August 30, 2012

On the Abstraction of the Computer

I have a theory. I've had this theory for a while, and occasionally will subject those around me to a description of it. This is going to be one of those times.


 First, a little background. I strongly think of the computer as a tool that enables us to perform tasks. Some of these tasks we did before computers were available, like balancing our checkbook, and the computer enables us to perform the task faster or more accurately or in some way better or easier. Other tasks did not exist before the computer came along, like surfing the web. Either way, the computer is simply a tool, and the tasks are really the important part of the equation.

Really, I think of the computer as a meta-tool, a toolbox or a workshop basically. The applications are the individual tools. I think understanding this idea, in some form, is one of the key factors in whether a novice user progresses to a state where they are comfortable with alternate browsers like Chrome or stays stuck in the "Internet Explorer == The Internet" mindset. The key realization is that the internet is the destination, the browser is simply the tool of choice for getting there, and there is more than one tool available that will do the job.

I use the browser in this example because these days it is the most common application that a friend or family member will eventually suggest a replacement for. I really do think that when this happens it is essentially a skill growth check for the user, to use RPG terms for a moment. If they get it, then it usually clicks for them, and they proceed to eventually discover other alternate applications. If they don't get it, then their skill growth is limited, at least in this area, until the next time they encounter this idea.

So, having laid out the background, let me tell you about my theory. My theory is that sometime in the near future, the act of computing will become abstracted and separated from the actual tool, the computer itself. I've had this theory since before tablets started getting popular, and it keeps getting more convincing to me.

When I imagine this in my head, I always think of a fictional house, one with multiple computers in it. In this house, in the present time, if I want to use my personal finance software to work on my family's budget, I will go to the computer in our home office, because that is where I have that software installed. If my kids want to play a computer game, they will go to the computer in the game room, because it has the fancy graphics card and nice monitor. If I want to look up some random piece of trivia while watching a movie, I will grab a smartphone or a tablet or maybe a chromebook, depending on what is within reach. I pick the computer to use for each task based on its convenience of access as well as its capabilities (is the app installed here, does it have the needed hardware).

Imagine that I have 7 computing devices in my house. That isn't that many, I promise. I could meet that with 2 smartphones, 1 laptop, 1 tablet, 2 gaming PCs, and 1 office PC. I didn't even get a home theater PC into the setup. How many of those devices are in use at any given point in time? How much processing power is going unused? It is really inefficient.

Look at the enterprise IT world, and this sort of scenario is where the growth of virtualization came from. Instead of 10 separate servers for 10 different apps, each of which consumes 10% CPU on average, combine them onto 1 physical server and let the CPU run at 100%. Or combine them onto 2, let each run at 50%, and you have spare capacity in case one server dies.

This is the reason I think computing will become abstracted. What if the computing resources in your house were a pool, instead of islands? What if your apps were centralized, and no matter which computer you were at, you could access anything you wanted? That lets me use the most convenient computer, which means I don't have to go upstairs to the office to work on the budget, because I can access that application from the laptop, or in the future, from the tablet or smartphone.

But what about the gaming machines? What about specialized hardware? Well, I don't have a present-day solution, but I imagine that, with an OnLive-type system, those will also become part of the pool. What if I could sit down at the tablet, and it would use the graphics card from one of the gaming computers (that nobody else was using) to render the graphics for the game I wanted to play? What if any screen, keyboard, and mouse (or touchscreen) combination could use any of the computing resources in the house to perform any task you wanted? Then you wouldn't have to worry about whether the machine had the capabilities. Once you had the capabilities in your home computing pool, they could be accessed from any device. Now it would just be an issue of using the most convenient device.

But, you would still be picking based on the size of the display, whether it has a mouse or a touchscreen, etc. Honestly, I think that will go away too. In my vision, I see a time when any surface can become a display and/or an input device. Sure, you could still have dedicated monitors and mice for those times you really want to play a game and reflexes matter. But for that time you are in the kitchen and want to look up that recipe your mom emailed you, it can pop up on some unused counter space as a display with touch controls. Want to finish a movie while relaxing in a hot bath? It can be displayed on the wall in the bathroom. Essentially, this is a world where every surface could be a display on demand, and the actual computer boxes themselves have faded away to a room where you plug in computing modules to add capacity.

I think that once everything can be a computer, the computer as we think of it will become an outdated concept. The idea of having to go upstairs to work on your budget, instead of clearing some space at the kitchen table and having it displayed there, will seem as old fashioned as driving around without turn-by-turn GPS on your phone and Pandora streaming in the background.

Combine this with wireless technology, and mobile devices can join the pool and access your resources when you are at home, then leave it and run on their own when you are gone. This would even work for devices like Google Glass, or other future technologies.

That is my theory. It's really more of a vision. I'm not claiming the implementation details will be correct, but I think that at a general level, it is the direction we are headed.

Now, why did I decide to share this with you today? I was listening to a podcast today, and they were talking about VDI (Virtual Desktop Infrastructure). This is technology like Citrix boxes that provide virtual desktops and apps to you. Virtual, in that they are run on the centralized hardware, but displayed on your screen. It's the new thin client.

They were speculating a little about where it was headed, and commented that the future might be one where we no longer bought a computer, but bought a desktop with apps installed instead, and just accessed the same one from wherever we were.

That got me thinking about my theory, which also has to do with computers going away, and that is why I finally decided to write this down.

Friday, March 23, 2012

SSL, Tomcat, Android, and keeping my sanity

Recently, at work, I was tasked with getting some SSL certs installed and working on a tomcat installation. This was a bit outside my normal duties, as work is rather segregated, and tomcat falls under an application admin's responsibilities, not a server admin's. However, there isn't an app admin available who knows tomcat, so I was given the job by virtue of competence, and having built the server. For reference, the OS was RHEL 5.7, running Tomcat 6.0.33, and trying to use JSSE for SSL.

Our normal setup for web servers is Apache, sometimes using the Cool Web Stack or something like it, so Tomcat isn't something that my team has any familiarity with. Additionally, I built this server, but that was just the OS (RHEL). A third party installed the Tomcat application server with Grails on top of it, for the purpose of hosting some mobile apps (Android and iPhone) created using their toolkit. They set this up without SSL, and in looking through their available documentation, the only reference I could find to SSL was a single footnote on a document about the security of the system, which essentially said: since you asked, of course this should all be done over SSL. Just ignore the fact that none of our documentation or reference implementations bother to do so.

So, I set about trying to get the SSL cert working, armed with a set of instructions (team standard procedures) for doing so with Apache, and a single page from our wiki on setting up SSL in Tomcat. This page was proof that someone had done so in the past, but it consisted of a couple of command lines to run, some Java source code to be compiled and then invoked, and no reference to the implementation details of telling Tomcat to use the SSL cert itself. The command line invocations converted the standard X.509 cert we received from our CA, and the key we generated when making the CSR, into another format, DER. The Java source code formed a program which would read in the DER-formatted certificate and key and convert them into a Java keystore (JKS) formatted file. The instructions were for Solaris, our main OS, and not RHEL, which these servers were running. The Java program wouldn't compile, because the JDK that the third party installed for use with Tomcat seemed to have a broken compiler! I installed a new JDK from the RHEL repos, and found that the source code didn't have any import statements, which also caused problems. A bunch of wildcard imports later, I had a compiled program and a freshly created keystore. I installed it into Tomcat, using some helpful online instructions, and verified that the site was now accessible over SSL. I made a mental note that I wanted to come back at some point and find a way to convert the SSL cert without using a custom compiled program, because that seems like overkill for a problem where standard tools should exist, and continued on my way.
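
The wiki page's exact commands aren't reproduced here, but the DER conversion it described can be done with standard OpenSSL tooling; a sketch with placeholder filenames would look something like this:

openssl x509 -in server-cert.pem -outform DER -out server-cert.der
openssl rsa -in server-key.pem -outform DER -out server-key.der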

I did end up finding a way to convert from the OpenSSL cert to a Java keystore without using a custom compiled Java program. After much, much searching, I found a tough-to-navigate site that was stuffed with useful information! This page shows how to use the openssl tool to combine a key and a cert into one PKCS12 file. Then, this page shows how to use the Java (or JDK, maybe) keytool command to import the PKCS12 file into a Java KeyStore file. Thus, we are now able to use two commands where before we used two different commands and a custom compiled Java program. (In theory, we don't have to perform the conversion to a Java KeyStore format at all, as Tomcat can be told to use the PKCS12 file directly as a keystore. However, this involves more poorly documented Tomcat configuration, and I didn't want to keep pressing my luck once I got everything working. If someone else wants to try for efficiency later on, then they are more than welcome to it.)

Those in charge of this project then decided I should turn off all non-SSL traffic to the servers. After doing this, they discovered that they could not download the new APK files to their Android phones from the server over SSL. Android was throwing an untrusted certificate error (the kind you expect with a self-signed cert, not a CA-issued one), and in this scenario it silently fails to download files from the server over SSL.

I suspected that the intermediate certs were not being handed out correctly, and was eventually able to prove this with the help of these two sites. Our internal instructions said to import the intermediate cert from our CA into the JKS file with an alias of intermediateca. The official instructions said to import it with an alias of root. Somewhere else online said to use an alias of intermediate. I tried all of these, as well as combining them all, with no luck. I looked through the documentation, and could find no mention of a specific alias name to use for Tomcat to magically pick it up and serve it out.
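
As an aside, you don't strictly need an external checking site to prove this. Something along these lines (the hostname is a placeholder) shows exactly which certificates the server hands out during the handshake; if only the server cert comes back and the verify output complains about the issuer, the intermediate is missing:

openssl s_client -connect fido.example.com:443 -showcerts < /dev/null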

I went searching again, and finally stumbled upon this question on Stack Overflow. This was the same problem I was having, so I tried the solution, but ran into problems. They placed the intermediate cert into /etc/ssl/certs, then ran the command to create the PKCS12 file with an additional flag, -chain. RHEL doesn't have an /etc/ssl/certs, so I searched and found the equivalent at /etc/pki/tls/certs. I tried placing the intermediate cert there and running the command with -chain added, and got errors because the cert wasn't found. I then went looking to see what options could be passed to the openssl command, and found the -CAfile and -caname flags. Using these, I was able to use the -chain flag and eventually get a Java KeyStore that caused Tomcat to serve out the intermediate cert correctly.


After some experimenting, I finally isolated what creates a working keystore. The -chain flag on the openssl command is the critical piece. Combine it with -CAfile to create the PKCS12 file with the intermediate cert included. The -caname flag turns out not to be needed at all. Importing the intermediate certs into the JKS (Java KeyStore) file with an alias doesn't matter at all. (It doesn't break anything to have them there, but it also isn't needed for it to work.) Counterintuitively, the working JKS file shows only one entry when viewed with keytool -list.

[root@fido sslcerts]# keytool -list -keystore fido.jks 
Enter keystore password:  

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

tomcat, Mar 22, 2012, PrivateKeyEntry, 
Certificate fingerprint (MD5): 83:F5:A4:7D:2A:39:35:FB:8B:41:B7:34:B5:97:45:92

I was able to verify with both of the SSL checking sites above that this file will serve out the intermediate certs correctly. On Android, it no longer throws the untrusted cert error, and now silently validates. So, now we have a new working procedure that only requires the same files we were downloading from the CA before, and uses two commands to turn them into a working Java KeyStore that will serve out intermediate certs correctly:
openssl pkcs12 -export -inkey fido-2012-03-15.key -in fido-2012-03-15-cert.cer \
-out fido_key_cert_chain.p12 -chain -name tomcat -CAfile fido-2012-03-15-interm.cer 

keytool -importkeystore -srckeystore fido_key_cert_chain.p12 -srcstoretype pkcs12 \
-srcstorepass changeme -destkeystore fido.jks -deststoretype jks -deststorepass changeme
 

Tuesday, November 22, 2011

Citrix Receiver on 64-bit Arch Linux

I recently made the switch from Ubuntu to Arch Linux (with E17) on my workstation at work, and I am now in the process of getting all my apps set up and working again. Pretty high up on that list was the Citrix Receiver. Work is a largely Windows-based setup, with Exchange for e-mail. I actually prefer a few things about the newer versions of Outlook, when compared with most of the Linux clients I have tried in the past.
This means my choices boil down to:
  1. Get Outlook running under Wine (yeah, that'll work out real well) 
  2. Run Outlook in a Windows VM (I have this setup, but prefer not to run the VM at all times, it is a real memory hog)
  3. Run a separate computer with Windows on it, just for Outlook (not going to happen)
  4. Get the Citrix receiver working, and use the Citrix version of Outlook that we make available
I chose #4. The problem here is not the Linux part; install packages exist for the receiver on Linux. The problem is the 64-bit part. Our Citrix server only hands out a 32-bit installer. I went looking, and while Citrix has started offering 64-bit installers now, they only come in .deb or .rpm form. There isn't a .tar.gz package like there is for 32-bit. Arch Linux, of course, does not use .deb or .rpm packages.

I had the receiver installed on Ubuntu, and it mostly worked, but I could not run the manager app to change the settings, which meant that I could never map my local hard drive to show up as a drive in the software run on Citrix. For e-mail, this meant I could not save or send attachments unless I used a USB stick for them, because the default settings would auto-mount USB devices.

I found these instructions on the Arch wiki and followed the manual install route, probably because I had already downloaded the package from Citrix and sunk some time into getting the installer to run, and didn't want to take the easy way out and use a pre-built package now.

Just to get the installer to run, I had to do some research, and finally figure out that the "no such file or directory" errors being thrown by echo_cmd were because I only had the 64-bit glibc libraries, and I needed to install the lib32-glibc from the multilib repo as well.
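
For anyone following along on Arch, that translates roughly to enabling the multilib repo and installing the 32-bit glibc package; a quick sketch:

# in /etc/pacman.conf, uncomment the multilib section:
#   [multilib]
#   Include = /etc/pacman.d/mirrorlist
sudo pacman -Syu lib32-glibc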

I followed the instructions on the wiki, making modifications as I went because my install of the receiver was in a different directory, and got a working install, except that I had problems getting Firefox to see the Citrix plugin for some reason. I was also not able to get the manager app (wfcmgr) to run, despite the wiki article explicitly saying it should. I was getting this error:

/opt/Citrix/ICAClient/wfcmgr: error while loading shared libraries: libXm.so.4: cannot open shared object file: No such file or directory

I did some more digging, and found that the 32-bit library package from the AUR containing libXm.so.4, aur-lib32-openmotif, installed into the /opt/lib32/usr/lib directory instead of into the /usr/lib32 directory where the wfcmgr program was looking for it.

One way to fix this is with a simple

sudo ln -s /opt/lib32/usr/lib/libXm.so.4 /usr/lib32/libXm.so.4

however, I chose to modify the PKGBUILD to put the libraries in /usr/lib32 with all the others, in case a program went looking for one of the other openmotif libraries in the future.

I also figured out that my problem getting Firefox to see the plugin was that I had checked whether the plugin was set up with

sudo nspluginwrapper -l

using sudo here, because the wiki article showed the install command, nspluginwrapper -i, being run as root. This showed the plugin to be in place already:


/root/.mozilla/plugins/npwrapper.npica.so
  Original plugin: /opt/Citrix/ICAClient/npica.so
  Plugin viewer: /usr/lib/nspluginwrapper/i386/linux/npviewer
  Wrapper version string: 1.4.4-1


It took me too long to realize that this plugin was not system-wide, but was root-specific, and that I needed to do this as my user instead. I checked and it was not set up for my user, so I used the -i command to install it, and restarted Firefox. It is now detected, which means I don't have to skip past the install prompt from the server when the silent-detection routine fails to find the plugin on my system.
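
In other words, the fix was just the same two nspluginwrapper commands run as my regular user, without sudo; something like this, using the plugin path from the listing above:

nspluginwrapper -l
nspluginwrapper -i /opt/Citrix/ICAClient/npica.so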

In the end, I have a newer version of Citrix Receiver installed, and I was able to set up my home directory to be mapped as a drive visible inside the Citrixed apps. I do not have USB device support, because the installer can't figure out Arch's system for managing services, but I don't think I'll miss that.

    Tuesday, August 16, 2011

    Sign Criticism

    Driving home after work today, I saw one of those little signs stuck in the ground at a corner that said "Get Customer Leads" and had a phone number. The two thoughts that went through my head were: 1) If you are so great at generating customer leads, then why aren't you contacting me instead of expecting me to call you? 2) If I were to call you, would I end up on your list of customer leads that you sell to others?