I spent some time trying to get my dotfiles working with git-crypt. I really tried. However, I kept having problems with it. Bad problems, like accidentally committing unencrypted files to my repo. Undoing that means purging and re-writing history, which is a major pain.
Some of these mistakes were my fault. I thought everything was working after a reimage and restore from backup, but I had failed to re-initialize git-crypt, so it didn't know it was supposed to be encrypting anything, or with what keys. However, some were not my fault. One of the problems is that, due to the transparent way git-crypt works, a git show will always show you the plaintext changes, so there is no obvious indicator that something isn't going to be encrypted as expected. Git-crypt added a check command in response to user feedback, which lets you confirm that a file will be encrypted on commit. Even so, I ran into at least one situation where I checked a file, it reported that the file would be encrypted, and yet on commit it was still stored in plain text.
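For anyone unfamiliar with the workflow, here is a rough sketch of that kind of verification. The file paths are made up for illustration, and the exact subcommand name varies by version; recent git-crypt releases call it status:
# Mark files to be encrypted via .gitattributes at the repo root (paths are examples).
echo 'secrets/** filter=git-crypt diff=git-crypt' >> .gitattributes
# Before committing, ask git-crypt what it intends to do with a file
# (recent versions report encrypted/not encrypted per file via "status").
git-crypt status secrets/api-keys
# Belt and suspenders: inspect the stored blob itself after committing.
# An encrypted blob should start with a GITCRYPT marker rather than readable text.
git cat-file -p HEAD:secrets/api-keys | head -c 32 | xxd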
There is just too much danger of accidentally exposing the sensitive files you are trying to keep encrypted for me to have any confidence in continuing to use git-crypt as a solution. That is a real shame, because the transparency that is its weakness is also its biggest strength. I was able to easily add it to my dotfiles script to handle decryption automatically, which made it frictionless in normal use, at least on the receiving/installing end of things.
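The hook in my install script was roughly this simple; the paths and key file name here are placeholders, not my actual layout:
# Decrypt the repo before symlinking anything, if a key is available (illustrative paths).
DOTFILES_DIR="$HOME/.dotfiles"
KEY_FILE="$HOME/secure/dotfiles.key"
cd "$DOTFILES_DIR"
if [ -f "$KEY_FILE" ]; then
    # Unlock with an exported symmetric key; GPG-based setups would just run "git-crypt unlock".
    git-crypt unlock "$KEY_FILE"
fi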
Tuesday, February 16, 2016
Tuesday, May 26, 2015
Dotfiles Part 2: I Knew It Couldn't Be That Easy
As I discussed in the first post, I wanted a place to store my dotfiles where I could easily pull them down onto a new system. GitHub has become a popular place for this. It makes sense. GitHub hosts git repos in the cloud, reachable from anywhere you have an internet connection. Git is a version control system that makes it easy to track changes to text files, which dotfiles are by definition. A simple git clone on a new system, and you have all your files ready and waiting for you.
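That bootstrap step really is a one-liner; the repo URL and target directory here are placeholders:
# Pull the dotfiles repo down onto a fresh system (example URL, not a real repo).
git clone https://github.com/example-user/dotfiles.git "$HOME/.dotfiles"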
There are, of course, some issues with simply turning your entire home directory into a git repo. So, many systems have been created that instead use symlinks pointing from the home directory into a repo-controlled directory. There is even a nice listing on the unofficial GitHub guide at http://dotfiles.github.io. It lists some bootstrap systems that handle the symlinking and setup. It moves on to app-specific options for things like shells and editors, which may have extensive configs and plugins better managed by a dedicated system. Lastly, it covers general-purpose dotfiles utilities, which may do more than just symlink and sync the files for you.
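To make the symlink approach concrete, a bare-bones bootstrap looks something like the following; the directory layout and file list are just assumptions for illustration, not any particular tool's convention:
# Link a few tracked dotfiles from the repo into the home directory.
DOTFILES_DIR="$HOME/.dotfiles"
for file in bashrc vimrc gitconfig; do
    # Back up any existing real file before replacing it with a symlink.
    if [ -e "$HOME/.$file" ] && [ ! -L "$HOME/.$file" ]; then
        mv "$HOME/.$file" "$HOME/.$file.bak"
    fi
    ln -sf "$DOTFILES_DIR/$file" "$HOME/.$file"
done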
I've started to figure out how I wanted to manage my dotfiles many times over the years, and never gotten much further than looking at this massive page of options, opening up many of them in tabs, and then getting overwhelmed or distracted. This time, however, I was determined to make a choice and start trying to implement it. Even if it didn't end up being the solution I use in the end, I needed to get started at some point and figure out what would and wouldn't work for me. So, I made an initial pick.
Monday, May 25, 2015
Dotfiles: More Than Meets the Eye
Dotfiles are pretty important. Anyone who has used a *NIX OS for a while will recognize the truth of this simple statement. For the uninitiated, Linux, Unix, and BSD (and OS X, which is BSD-derived) operating systems have many things in common. One of these things is that files whose names begin with a period, like .config for example, are treated specially and hidden from the vanilla file-listing command, ls. Now, they aren't super secret or anything. A simple -a added to the ls command (making it ls -a) will show them again; they are just hidden by default. Due to the naming and pronunciation (.config would be spoken as "dot config" by most people), they have become known collectively as dotfiles.
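If you have never noticed them, the difference is easy to see; the directory contents shown in the comments are just an example:
# A plain listing hides anything whose name starts with a period.
ls ~
# Documents  Downloads  Music
# Adding -a shows the hidden files and directories as well.
ls -a ~
# .  ..  .bashrc  .config  .vimrc  Documents  Downloads  Music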
The assumption is that they will be system files, usually configuration settings and the like, that you won't work with every day and won't want cluttering up your file listings all the time. That is a pretty good assumption, considering that the things multiply like crazy. These days it isn't just dotfiles; you have entire hidden subdirectories that hold all sorts of things for whatever program uses them. For example, I have a .weechat directory that holds all the config settings, plugins, and chat logs from my IRC client. Many programs use these the way Windows programs use their directory under Program Files. It is a good system, as it keeps the config easily findable in the user's home directory and also keeps it user-specific.
Since they control the configuration of so many different pieces of your system, the dotfiles become important. They are how your system has been configured the way you like it, how it knows the behavior you want. They become very valuable and, over time, very difficult to replace. Some people spend years making small modifications to files like .bashrc and .vimrc until the resulting config is perfect. Recreating it from scratch might be impossible, because you won't remember the setting you found on that obscure blog three years ago that solved that one issue. You basically have to start over and tweak slowly until you arrive at another working, but inevitably different, config.
Thursday, August 30, 2012
On the Abstraction of the Computer
I have a theory. I've had this theory for a while, and occasionally will subject those around me to a description of it. This is going to be one of those times.
First, a little background. I strongly think of the computer as a tool that enables us to perform tasks. Some of these tasks we did before computers were available, like balancing our checkbook, and the computer enables us to perform the task faster or more accurately or in some way better or easier. Other tasks did not exist before the computer came along, like surfing the web. Either way, the computer is simply a tool, and the tasks are really the important part of the equation.
Really, I think of the computer as a meta-tool, basically a toolbox or a workshop. The applications are the individual tools. I think understanding this idea, in some form, is one of the key factors in whether a novice user progresses to being comfortable with alternate browsers like Chrome or stays stuck in the "Internet Explorer == The Internet" mindset. It means recognizing that the internet is the destination, that the browser is simply the tool of choice for getting there, and that there is more than one tool available that will work.
I use the browser in this example because these days it is the most common application that a friend or family member will eventually suggest a replacement for. I really do think that when this happens it is essentially a skill growth check for the user, to use RPG terms for a moment. If they get it, then it usually clicks for them, and they proceed to eventually discover other alternate applications. If they don't get it, then their skill growth is limited, at least in this area, until the next time they encounter this idea.
So, having laid out the background, let me tell you about my theory. My theory is that sometime in the near future, the act of computing will become abstracted and separated from the actual tool, the computer itself. I've had this theory since before tablets started getting popular, and it keeps getting more convincing to me.
When I imagine this in my head, I always think of a fictional house, one with multiple computers in it. In this house, in the present time, if I want to use my personal finance software to work on my family's budget, I will go to the computer in our home office, because that is where I have that software installed. If my kids want to play a computer game, they will go to the computer in the game room, because it has the fancy graphics card and nice monitor. If I want to look up some random piece of trivia while watching a movie, I will grab a smartphone or a tablet or maybe a chromebook, depending on what is within reach. I pick the computer to use for each task based on its convenience of access as well as its capabilities (is the app installed here, does it have the needed hardware).
Imagine that I have 7 computing devices in my house. That isn't that many, I promise. I could meet that with 2 smartphones, 1 laptop, 1 tablet, 2 gaming PCs, and 1 office PC. I didn't even get a home theater PC into the setup. How many of those devices are in use at any given point in time? How much processing power is going unused? It is really inefficient.
Look at the enterprise IT world, and this sort of scenario is where the growth of virtualization came from. Instead of 10 separate servers for 10 different apps, each of which consumes 10% CPU on average, combine them onto 1 physical server and let the CPU run at 100%. Or onto 2, let each run at 50%, and you have spare capacity in case one server dies.
This is the reason I think computing will become abstracted. What if the computing resources in your house were a pool, instead of islands? What if your apps were centralized, and no matter what computer I was at I could access anything I wanted to? That lets me use the most convenient computer, which means I don't have to go upstairs to the office to work on the budget, because I can access that application from the laptop, or in the future, from the tablet or smartphone.
But what about the gaming machines? What about specialized hardware? Well, I don't have a present-day solution, but I imagine that with an OnLive type of system, that will also become part of the pool. What if I could sit down at the tablet, and it would use the graphics card from one of the gaming computers (that nobody else was using) to render the graphics for the game I wanted to play? What if any screen, keyboard, and mouse (or touchscreen) combination could use any of the computing resources in the house to perform any task you wanted? Then you wouldn't have to worry about whether the machine had the capabilities. Once you had the capabilities in your home computing pool, they could be accessed from any device. It would just be a matter of using the most convenient device.
But you would still be picking based on the size of the display, whether it has a mouse or a touchscreen, etc. Honestly, I think that will go away too. In my vision, I see a time when any surface can become a display and/or an input device. Sure, you could still have dedicated monitors and mice for those times you really want to play a game and reflexes matter. But for that time you are in the kitchen and want to look up that recipe your mom emailed you, it can pop up on some unused counter space as a display with touch controls. Want to finish a movie while relaxing in a hot bath? It can be displayed on the wall of the bathroom. Essentially, this is a world where every surface could be a display on demand, and the actual computer boxes themselves have faded away to a room where you plug in computing modules to add capacity.
I think that once everything can be a computer, the computer as we think of it will be an outdated concept. The idea of having to go upstairs to work on your budget, instead of clearing some space at the kitchen table and having it displayed there, will seem as old fashioned as driving around without turn-by-turn GPS on your phone and Pandora streaming in the background.
Combine this with wireless technology, and mobile devices can join the pool and access your resources when you are at home, then leave it and run on their own when you are gone. This would even work for devices like Google Glass, or other future technologies.
That is my theory. It's really more of a vision. I'm not claiming the implementation details will be correct, but I think that at a general level, it is the direction we are headed.
Now, why did I decide to share this with you today? I was listening to a podcast today, and they were talking about VDI (Virtual Desktop Infrastructure). This is technology like Citrix boxes that provide virtual desktops and apps to you. Virtual, in that they are run on the centralized hardware, but displayed on your screen. It's the new thin client.
They were speculating a little about where it was headed, and commented that the future might be one where we no longer bought a computer, but bought a desktop with apps installed instead, and just accessed the same one from wherever we were.
That got me thinking about my theory that also had to do with computers going away, and that is why I decided to write this down finally.
Friday, March 23, 2012
SSL, Tomcat, Android, and keeping my sanity
Recently, at work, I was tasked with getting some SSL certs installed and working on a tomcat installation. This was a bit outside my normal duties, as work is rather segregated, and tomcat falls under an application admin's responsibilities, not a server admin's. However, there isn't an app admin available who knows tomcat, so I was given the job by virtue of competence, and having built the server. For reference, the OS was RHEL 5.7, running Tomcat 6.0.33, and trying to use JSSE for SSL.
Our normal setup for web servers is Apache, sometimes using the Cool Web Stack or something like it, so Tomcat isn't something my team has any familiarity with. Additionally, I built this server, but that was just the OS (RHEL). A third party installed the Tomcat application server with Grails on top of it, for the purpose of hosting some mobile apps (Android and iPhone) created with their toolkit. They set this up without SSL, and in looking through their available documentation, the only reference to SSL I could find was a single footnote in a document about the security of the system, which essentially said: since you asked, of course this should all be done over SSL; just ignore the fact that none of our documentation or reference implementations bother to do so.
So, I set about trying to get the SSL cert working, armed with a set of instructions (team standard procedures) for doing so with Apache, and a single page from our wiki on setting up SSL in Tomcat. That page was proof that someone had done this in the past, but it consisted of a couple of command lines to run, some Java source code to be compiled and then invoked, and no mention of the implementation details of telling Tomcat to use the SSL cert itself. The command line invocations converted the standard X.509 cert we received from our CA, and the key we generated when making the CSR, into another format, DER. The Java source code formed a program which would read in the DER-formatted certificate and key and convert them into a Java KeyStore (JKS) file. The instructions were for Solaris, our main OS, not the RHEL these servers were running. The Java program wouldn't compile, because the JDK that the third party installed for use with Tomcat seemed to have a broken compiler! I installed a new JDK from the RHEL repos, and then found that the source code didn't have any import statements, which also caused problems. A bunch of wildcard-based imports later, I had a compiled program and a freshly created keystore. I installed it into Tomcat using some helpful online instructions, made a mental note that I wanted to come back at some point and find a way to convert the SSL cert without a custom compiled program (because that seems like overkill for a problem where standard tools should exist), and continued on my way after verifying that the site was now accessible over SSL.
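For context, the conversion step in that old procedure looked roughly like this; the file names are placeholders, and the two DER files were then fed to the custom Java program that built the keystore:
# Convert the CA-issued cert and our private key from PEM to DER format.
openssl x509 -in fido-cert.pem -outform DER -out fido-cert.der
openssl rsa -in fido-key.pem -outform DER -out fido-key.der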
I did end up finding a way to convert the OpenSSL cert into a Java keystore without a custom compiled Java program. After much, much searching, I found a tough-to-navigate site that was stuffed with useful information! One page shows how to use the openssl tool to combine a key and a cert into one PKCS12 file. Another shows how to use the JDK's keytool command to import that PKCS12 file into a Java KeyStore file. Thus, we can now use two commands where before we used two different commands plus a custom compiled Java program. (In theory, we don't have to perform the conversion to Java KeyStore format at all, as Tomcat can be told to use the PKCS12 file directly as a keystore. However, that involves more poorly documented Tomcat configuration, and I didn't want to keep pressing my luck once I got everything working. If someone else wants to try for efficiency later on, they are more than welcome to it.)
Those in charge of this project then decided I should turn off all non-SSL traffic to the servers. After I did this, they discovered that they could not download the new APK files to their Android phones from the server over SSL. Android was throwing an untrusted certificate error (the kind you expect with a self-signed cert, not a CA-issued one), and in that situation it silently fails to download files over SSL.
I suspected that the intermediate certs were not being handed out correctly, and I was eventually able to prove it with the help of two SSL-checking sites. Our internal instructions said to import the intermediate cert from our CA into the JKS file with an alias of intermediateca. The official instructions said to import it with an alias of root. Somewhere else online said to use an alias of intermediate. I tried all of these, as well as combining them, with no luck. I looked through the documentation and could find no mention of a specific alias name that would make Tomcat magically pick the cert up and serve it out.
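If you would rather not rely on a web-based checker, the same diagnosis can be made from the command line; the host name below is a placeholder:
# Show the certificate chain the server actually hands out during the TLS handshake.
# A missing intermediate shows up as a chain containing only the server cert.
openssl s_client -connect fido.example.com:443 -showcerts </dev/null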
I went searching again, and finally stumbled upon a question on Stack Overflow describing the same problem I was having, so I tried the solution, but ran into problems. They placed the intermediate cert into /etc/ssl/certs, then ran the command to create the PKCS12 file with an additional flag, -chain. RHEL doesn't have an /etc/ssl/certs, so I searched and found the equivalent at /etc/pki/tls/certs. I tried placing the intermediate cert there and running the command with -chain added, and got errors because the cert wasn't found. I then went looking at what other options could be passed to the openssl command, and found the -CAfile and -caname flags. Using these, I was able to use the -chain flag and eventually get a Java KeyStore that caused Tomcat to serve out the intermediate cert correctly.
After some experimenting, I finally isolated what creates a working keystore. The -chain flag on the openssl command is the critical piece. Combine it with -CAfile to create the PKCS12 file with the intermediate cert included. The -caname flag turns out not to be needed at all. Importing the intermediate certs into the JKS (Java KeyStore) file under an alias doesn't matter at all. (It doesn't break anything to have them there, but it also isn't needed for things to work.) Counterintuitively, the working JKS file will appear to contain only one entry when viewed with keytool -list.
[root@fido sslcerts]# keytool -list -keystore fido.jks
Enter keystore password:
Keystore type: JKS
Keystore provider: SUN
Your keystore contains 1 entry
tomcat, Mar 22, 2012, PrivateKeyEntry,
Certificate fingerprint (MD5): 83:F5:A4:7D:2A:39:35:FB:8B:41:B7:34:B5:97:45:92
I was able to verify with both of the SSL-checking sites mentioned above that this keystore serves out the intermediate certs correctly. On Android, it no longer throws the untrusted cert error and now silently validates. So now we have a working procedure that requires only the same files we were downloading from the CA before, and uses two commands to turn them into a Java KeyStore that serves out the intermediate certs correctly:
openssl pkcs12 -export -inkey fido-2012-03-15.key -in fido-2012-03-15-cert.cer \
-out fido_key_cert_chain.p12 -chain -name tomcat -CAfile fido-2012-03-15-interm.cer
keytool -importkeystore -srckeystore fido_key_cert_chain.p12 -srcstoretype pkcs12 \
-srcstorepass changeme -destkeystore fido.jks -deststoretype jks -deststorepass changeme
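One extra check that would have saved me some confusion: keytool's verbose listing does reveal the full chain, even though the summary listing shows a single entry. Something along these lines, using the keystore from the example above:
# The verbose output includes the certificate chain stored under the single entry;
# a chain length greater than 1 confirms the intermediate made it into the keystore.
keytool -list -v -keystore fido.jks | grep -i "chain length"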