tag:blogger.com,1999:blog-98960682024-02-20T13:32:07.283-06:00Chocolate BubblegumA Meandering Journey Through Whatever Interests Me at the Moment.Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.comBlogger36125tag:blogger.com,1999:blog-9896068.post-12601512812552883412022-09-03T11:33:00.001-05:002022-09-03T11:33:30.096-05:00Why YAML, Why?<p> Last night I finally joined the ranks of those who despise YAML. I've always known (and known of) admins who loathe YAML. They rail against it every time they have to use it, or any time it is even mentioned. Primarily, they complain about it being whitespace sensitive.</p><p> In the past, I considered this an extreme reaction. An overreaction to a minor annoyance. I'd not encountered any problems in using YAML so far, even when making the same edits they complained bitterly about. I considered the indentation fairly logical, and figured that if you treated YAML a bit like a wild animal, by giving it some respect and moving slowly, it wasn't very likely to bite you and everything would be fine. For a long time, it was. Until it wasn't.</p><p> I should preface this by saying that my bad experience with YAML occurred at almost 2am, so I wasn't functioning at peak capacity by any means. I was trying to implement <a href="https://www.authelia.com/" target="_blank">Authelia</a> for my self-hosted sites, and it uses YAML for both its config and its on-disk user database. I finished configuring it and launched it for the first time, then tried visiting my site, and got a 500 error. </p><p> Several rounds of troubleshooting later, I found that the container was failing due to errors reading the config file. I found the line number where it began complaining about not finding expected keys and took a look, but couldn't see anything wrong. After many attempts, I ended up going back to the sample config file I had copied as the base for my file, and copied several lines to either side of the offending line. I then removed these same visually identical lines from my config, and pasted those from the sample config back in. Visually, nothing had changed, but the logs no longer complained about errors in the configuration.</p><p> Instead, they complained that the display name of my one user couldn't be an empty value. It wasn't empty. I tried several variations before finally tracking down a sample file, commenting out my entry, pasting in the sample, and then modifying it with my user's details. And it worked. Annoyingly.</p><p> My best guess is that the source of the problem was whitespace: tabs instead of spaces, whitespace at the ends of lines, or whitespace on seemingly blank lines. Visually, there were no differences between the working and the non-working code. Indentation was the same. And of course, the error messages are very unhelpful. The problem wasn't that the value was empty, but that something else was causing the parsing to be incorrect.</p><p> So, freshly burned by this experience, I have reluctantly converted to the side of those who consider YAML a terrible format for configuration files. There are much more forgiving options out there that don't require dark magic or blind copy-pasting to get working when nothing is visually wrong with the data. </p><p> You won't convince me that whitespace is a problem in Python though. 
Not yet at least.</p>Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-20507715585484746922022-08-30T17:09:00.004-05:002022-08-30T17:11:53.223-05:00Changing terminal font in Visual Studio Code on Linux to get symbols working<p> I've been using VS Code lately, since Atom is being killed off. I also switch to using ZSH with OhMyZsh a while back. I had noticed that while using the integrated terminal in VS Code, all of my nice status icons were showing as empty boxes, indicating that the font didn't have symbol support compiled in.</p><p> I found <a href="https://youngstone89.medium.com/how-to-change-font-for-terminal-in-visual-studio-code-c3305fe6d4c2" target="_blank">this post</a> for fixing the problem on OSX, and was able to adapt it to my needs. </p><p style="text-align: left;"></p><ul style="text-align: left;"><li>Press Ctrl + Shift + P and choose Preferences: Open User Settings (JSON)</li><li>Add these lines (in my case) to your config:</li></ul><div><div style="background-color: #002b36; color: #839496; font-family: "Droid Sans Mono", "monospace", monospace; font-size: 14px; line-height: 19px; white-space: pre;"><div> <span style="color: #859900;">"terminal.integrated.defaultProfile.linux"</span>: <span style="color: #2aa198;">"zsh"</span>,</div><div> <span style="color: #859900;">"terminal.integrated.fontFamily"</span>: <span style="color: #2aa198;">"SauceCodePro Nerd Font"</span></div></div></div><div><br /></div><div>SauceCodePro is my preferred font currently, being the <a href="https://www.nerdfonts.com/" target="_blank">Nerd Fonts</a> version of Adobe's <a href="https://github.com/adobe-fonts/source-code-pro" target="_blank">Source Code Pro</a> font.</div><div><br /></div><div>Once you save the settings.json file, the terminal (if visible) should immediately update and your symbols should be visible. This of course assumes that you have a working font with symbols already installed and that you specify it correctly on the second line.</div><p></p>Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-48135366397471806652016-02-16T00:11:00.000-06:002016-02-16T00:11:47.625-06:00Dotfiles Part 3: Wherein Things Remain the SameI spent some time trying to get my dotfiles working with <a href="https://github.com/AGWA/git-crypt">git-crypt</a>. I really tried. However, I kept having problems with it. Bad problems, like accidentally committing unencrypted files to my repo. Undoing that means <a href="https://help.github.com/articles/remove-sensitive-data/">purging and re-writing history</a>, which is a major pain.<br />
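To give a rough idea of why it's such a pain, scrubbing a file from history means rewriting every commit that ever touched it and then force-pushing the result; the usual incantation looks something like this (the path is a placeholder):<br />
<br />
<pre class="code"><code># rewrite all branches and tags, dropping the offending file from every commit
git filter-branch --force --index-filter \
  'git rm --cached --ignore-unmatch path/to/secret-file' \
  --prune-empty --tag-name-filter cat -- --all

# then force-push the rewritten history (and hope nobody else has cloned it)
git push origin --force --all
git push origin --force --tags</code></pre>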
<br />
Some of these mistakes were my fault. I thought everything was working after a reimaging and restore from backup, but had failed to re-initialize git-crypt, so it didn't know it was supposed to be encrypting, or with what keys. However, some were not. One of the problems is that due to the transparent way git-crypt works, a git show will always show you the plaintext changes, so there is no obvious indicator that something isn't going to be encrypted as expected. Git-crypt added a <a href="https://news.ycombinator.com/item?id=7509871">check command</a> because of user feedback, which lets you check to make sure a file is being encrypted on commit. I ran into at least one situation where I checked a file with git-crypt check, and it reported that it would be encrypted, and yet on check-in it was still committed in plain text.<br />
<br />
There is just too much danger of accidentally exposing the sensitive files you are trying to keep encrypted for me to have any confidence in continuing to use git-crypt as a solution. Which is a real shame, because the transparency that is its weakness is also its biggest strength. I was able to easily <a href="https://github.com/madasi/dotfiles/blob/master/bin/dotfiles#L297">add it into the dotfiles script</a> to handle decrypting automatically, which made it frictionless in normal use, at least on the receiving/installing end of things.<br />
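For reference, the day-to-day git-crypt workflow really is only a handful of commands, which is what makes it so attractive; roughly (the file pattern here is just an example):<br />
<br />
<pre class="code"><code># one-time setup in the repo
git-crypt init
echo 'private/** filter=git-crypt diff=git-crypt' >> .gitattributes
git add .gitattributes

# check what will and won't be encrypted before trusting it
git-crypt status

# carry a copy of the symmetric key to other machines
git-crypt export-key ~/dotfiles.key

# on a fresh clone, unlock with that key and everything is transparent again
git-crypt unlock ~/dotfiles.key</code></pre>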
<br />
<a name='more'></a><br />
In addition to the problems with encrypting the files, and partly because of them, I also found myself in cases where I needed to update one of my dotfiles quickly in order to be able to finish something I was working on right then. I usually ended up modifying the functioning file instead of taking the time to make the change in the repo and commit it, thus causing the repo version of my files to become out of date, which defeats the entire point of keeping them in a repo. <a href="https://github.com/cowboy/dotfiles.git">Dotfiles</a> is smart enough to make a copy of modified files when re-run; however, you are still working against the system you have set up when you do this. One of the reasons I did this was that I was modifying files that I needed encrypted when committed. I also did it because it was simpler and more familiar, as it had been some time since I last worked on my dotfiles repo and I wasn't sure how up-to-date it currently was, or if there were already untracked changes that I had made. So, it was easier and quicker to add more technical debt to the pile and go back to my work than to stop for a minute and figure out the right way to make the needed changes.<br />
<br />
Ultimately, I still need a system to handle these files for me. I haven't decided to move away from <a href="https://github.com/cowboy/dotfiles.git">dotfiles</a>, although I do find its method of handling cross platform compatibility to be less than I had initially hoped for. I do, however, need to find another solution to encrypting my sensitive files. I know of methods for just using PGP to encrypt before committing them, however they would require me to manually decrypt them every time I wanted to install or update them locally, and that is a huge barrier to usage. <a href="https://github.com/AGWA/git-crypt">Git-crypt</a> just worked on decrypting, but the same thing that made it just work there made it just fail too easily when adding the files.Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-31771544195351133702015-05-26T01:14:00.001-05:002015-05-26T01:18:57.678-05:00Dotfiles Part 2: I Knew It Couldn't Be That Easy As I discussed in <a href="http://blog.madasi.com/2015/05/dotfiles-more-than-meets-eye.html">the first post</a>, I wanted a place to store my dotfiles where I could easily pull them down onto a new system. <a href="https://github.com/">Github</a> has become a popular place for this. It makes sense. Github is a cloud based git repo that you can easily reach from anywhere you have an internet connection. Git is a <a href="http://en.wikipedia.org/wiki/Revision_control">VCS</a> which makes it easy to track changes to text files, which dotfiles are by definition. A simple git clone on a new system, and you have all your files ready and waiting for you.<br />
<br />
There are, of course, some issues with simply creating a git repo out of your entire home directory. So, many systems have been created which usually use symlinks to point into the repo controlled directory instead. There is even a nice listing available on the <a href="http://dotfiles.github.io/">unofficial guide</a> at github at <a href="http://dotfiles.github.io/">http://dotfiles.github.io</a>. It lists out some bootstrap systems to handle the symlinking and setup. It moves on to some app specific options, for things like shells and editors which may have extensive config and plugins that may be better managed with a dedicated system. Lastly, it covers general purpose dotfiles utilities, which may do more than just symlinking and syncing them for you.<br />
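At their core, most of these bootstrap systems are doing something very simple. A stripped-down sketch, assuming the repo lives at ~/.dotfiles with the linkable files under link/, is little more than:<br />
<br />
<pre class="code"><code>#!/usr/bin/env bash
# link every file in ~/.dotfiles/link into $HOME as a dotfile
for file in "$HOME/.dotfiles/link/"*; do
  target="$HOME/.$(basename "$file")"
  [ -e "$target" ] && mv "$target" "$target.bak"  # keep a backup of anything already there
  ln -s "$file" "$target"
done</code></pre>
The real systems add OS detection, prompts, backups, and update logic on top of that, but the symlink loop is the heart of it.<br />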
<br />
I've started to figure out how I wanted to manage my dotfiles many times over the years, and never gotten much further than looking at <a href="http://dotfiles.github.io/">this massive page of options</a>, opening up many of them in tabs, and then getting overwhelmed or distracted. This time, however, I was determined to make a choice and start trying to implement it. Even if it didn't end up being the solution I use in the end, I needed to get started at some point and figure out what would and wouldn't work for me. So, I made an initial pick.<br />
<br />
<a name='more'></a> After looking over all of the bootstrap systems, I selected <a href="https://github.com/cowboy/dotfiles">Ben Alman's dotfiles</a> due to their multiple OS support. This is important to me as I have switched between various Linux flavors and now OSX several times in the past. Most of my dotfiles work on either but I have found some pieces that are OS specific. The idea of having one repo for my dotfiles that would work on either OS is very attractive to me.<br />
<br />
I pulled up the repository and read the README to understand what it does and how it works. Then I started looking at all the included files. Since this isn't just a repo for the program, but is actually the repo the author maintains for his personal use, it has the author's personal dotfiles already in place. This is one of the fringe benefits of using github for storing dotfiles: it becomes easy to look at other people's and get ideas for better setups and improvements to what you already have. There are, of course, serious risks to just using other people's configs though, especially when you don't know what they actually do.<br />
<br />
There were a lot of files that wouldn't be relevant for me, and I didn't really want to replace my existing files completely either, so I chose not to fork this repo as a starting point. Instead, I took my own <a href="https://github.com/madasi/dotfiles">empty repo</a> which I had created in the past for use whenever I got around to actually completing this project and cloned it to my machine. I also cloned <a href="https://github.com/cowboy/dotfiles">Ben Alman's repo</a> into a separate directory. This allows me to pick and choose what I want from his repo and copy it over into mine piece by piece. I left the license and README intact for the most part, since I am using someone else's work after all.<br />
<br />
The concept is pretty simple and explained well in the README. One directory for files that get linked into your home directory, one for files that get sourced, etc. Interestingly, there is also a directory for files that get copied. The source repo has only one file here, a .gitconfig. The author explains that this is for files that you will modify on your local system after install because they contain data that is either unique to that system or sensitive and you don't want it in a public git repo. In the case of .gitconfig this can be things like your email address and possibly SSH or API keys. This makes sense, but was also my first wrinkle. Yes, I don't want that stuff to be public (email I care less about, but private keys I certainly care about). However, I also don't want to have to do a bunch of hand edits after I install, especially of stuff like keys, which aren't exactly easy to memorize.<br />
<br />
I got a little further in adding my own stuff and hit my first snag. I copied some things from the author's gitconfig into my own, thinking I understood what it was doing. Well, it broke my ability to git push. So, this is just a cautionary tale to remind you: blindly copying config files is a recipe for a bad time. Thankfully, this is all version controlled now, so I just committed a change commenting those lines out and everything was happy again.<br />
<br />
Now I got into the bash configuration files. These are a big change from my current files, in that they are barebones files that source a directory of config files. This does several things. It lets you break your bash (in my case) config up into logical pieces, which makes maintenance easier. It also lets you guard each piece with some logic code that checks the OS, thus letting you have different configs on different OSes. This is good, but also got me worried. Sure, this works because bash lets you source multiple files or directories by design. However, is this the only way to have OS-specific configs? What about config files that don't support sourcing like this? I thought, from the initial description, that this script had a way of handling that built in, instead of doing it this way.<br />
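In other words, the .bashrc that gets linked in is little more than a loop, and each sourced piece guards itself. Something along these lines (the directory layout is illustrative, not exactly what the repo uses):<br />
<br />
<pre class="code"><code># ~/.bashrc: source every piece in ~/.dotfiles/source
for piece in "$HOME/.dotfiles/source/"*.sh; do
  source "$piece"
done

# ~/.dotfiles/source/osx.sh: only takes effect on OSX
if [[ "$(uname)" == "Darwin" ]]; then
  export CLICOLOR=1      # BSD ls coloring
  alias ls='ls -G'
fi</code></pre>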
<br />
I also realized at this point that I can't put all of my bash config into this repo. I have a bunch of aliases that are work-specific and that would reveal internal config information that shouldn't go into a public repo for this reason. This might be a problem. I spent some time thinking and looking, and found that I can set up git-crypt to encrypt the files before uploading and decrypt them upon download. That would let me transfer a key over to my new system, and then these files would work, but they wouldn't be exposed while on github. That would work, but only because we can break the bash config into pieces. What about my ssh config, where I have sensitive information as well, like individual host configs that specify username, port number, and ssh key to use? That isn't enough to get in, but is enough to tell you where to start looking and what is needed to get in. Additionally, I'm not aware of a way to source additional files into an ssh config, although I admittedly haven't looked yet. I'm also not aware of a way to encrypt only certain parts of a file; it is all or nothing from what I have seen so far.<br />
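To make that concrete, the entries in question look roughly like this (all of the details here are invented), which is exactly the sort of thing that shouldn't sit in a public repo:<br />
<br />
<pre class="code"><code># ~/.ssh/config: a typical work host entry
Host prod-db
    HostName db01.internal.example.com
    User deploy
    Port 2222
    IdentityFile ~/.ssh/work_prod_key
    ProxyCommand ssh -W %h:%p bastion.example.com</code></pre>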
<br />
This is the problem I've come up against so far. My dotfiles now carry many sensitive pieces of information, like api keys, internal api endpoints, etc. These are all pieces of information that I want synced, since they are the exact pieces I don't want to lose in case of a system failure or migration. However, they also cannot go on public github in plain text. Encryption may be one option, if they can live in standalone files, but that may not be the case. A second option may be to look at one of the general-purpose utilities that support using multiple repos as sources, so you can mix public and private repos. This would mean still having the information in plain text, but on a private repo instead of a public one. This is better, but I'm not sure if it is good or not. A private repo means it isn't open to the world, but you and the admin of the repo will have access to it, as well as anyone who gets in via a security flaw of some sort. We have an internal github at work I could use, but it would still be visible to anyone on my team unless I can lock that specific repo down properly. That works for work-related private info, but what about my personal sensitive data? Github only offers private repos on a paid account, so is it worth a fee?<br />
<br />
Another theoretical option would be to set up a script system that supports modifying the dotfiles on the fly. This would let you encrypt just the sensitive pieces, then have them decrypted, parsed, and inserted into their corresponding files on install. This gets around needing to source multiple files as a requirement for encryption; however, you can't modify the files in place, or they would include the secrets if you committed and pushed them to the repo again. This means more complexity is needed, a lot more.<br />
<br />
So, this is where I have gotten to so far. I was just getting started adding my stuff to my new dotfiles repo, and I realized that this approach may not work for me at all. Interestingly, I haven't seen these issues touched on in almost any of what I've read so far on the subject. Either I have much more sensitive data in my dotfiles than other people, or it just hasn't been talked about that much. So, my newly started repo is in an unsafe state. If I run the dotfiles script again, it will replace my existing bash profile, but I haven't added my changes into the repo yet because of the sensitive pieces. I'm going to have to think about how to solve this problem and do some more research. I should probably remove the new bash scripts from the repo too, so I don't accidentally replace mine.<br />
<br />
If you have any ideas for how to solve this, let me know. Otherwise, I'll hopefully be back when I've come up with an idea, if I don't forget about it entirely in the mean time.Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-42692867778939201182015-05-25T20:56:00.000-05:002015-05-25T20:56:21.499-05:00Dotfiles: More Than Meets the Eye Dotfiles are pretty important. Anyone who has used a *NIX OS for a while will recognize the truth of this simple statement. For the uninitiated, Linux, Unix, and BSD (and OSX, which is a variant of BSD) operating systems have many things in common. One of these things is that files whose name begins with a period, like .config for example, are treated specially and hidden from the vanilla file list command, ls. Now, they aren't super secret or anything. A simple -a added to the ls command (making it ls -a) will show them again, they are just hidden by default. Due to the naming and pronunciation (.config would be spoken dotconfig by most people) they have become known collectively as dotfiles.<br />
<br />
The assumption is that they will be system files, usually configuration settings and the like, that you won't work with every day and won't want cluttering up your file listings all the time. Which is a pretty good assumption, considering that the things multiply like crazy. These days it isn't just dotfiles; you have entire subdirectories that hold all sorts of things for whatever program uses them. For example, I have a .weechat directory that holds all the config settings, plugins, and chat logs from my irc client. Many programs use these the way Windows programs use their directory under Program Files. It is a good system, as it keeps the config easily findable in the user's home directory and also keeps the config user-specific.<br />
<br />
Since they control the configuration of so many different pieces of your system, the dotfiles become important. They are how your system has been configured the way you like it, how it knows the behavior you want. They become very valuable, and over time, very difficult to replace. Some people spend years making small modifications to files like .bashrc and .vimrc until the resulting config is perfect. However, recreating it from scratch might be impossible, as you won't remember the setting you found on that obscure blog 3 years ago that solved that one issue. You basically have to start over and tweak slowly until you arrive at another working, but inevitably different, config.<br />
<a name='more'></a><br />
This becomes a real pain point in a few scenarios. One is data loss or wiping your system for any reason. If you didn't have an up-to-date backup that you were sure to configure to include your dotfiles, you just lost all your config and get to start over. In the past I used JungleDisk as a backup solution. (Disclaimer: JungleDisk is owned by my employer, Rackspace. They weren't owned by Rackspace when I started using it (nor was I employed by Rackspace then either.)) When I was using Windows laptops, it worked great. When I was using it on Linux systems it worked ok, but restored with weird permissions. That was ok because I had the data and could fix the permissions as needed. Now, I use a MacBook Pro running OSX. In the three years I've had it, I have already had the hard drive fail once. After my failed drive was replaced, I was given back a freshly imaged version of OSX that was newer than the one I had been running before. I promptly set JungleDisk to restoring my backup. In short order, I had an unbootable system. It took me many cycles of getting it re-imaged and trying to restore my backup before I was able to figure out the culprit files and exclude them from the restoration. It looked like one of the applications (I've long since forgotten which) was overwriting a system library with an older version when JungleDisk restored it. This was a very bad experience. I've since stopped using JungleDisk (for other reasons) and haven't started using an alternative yet, because I'm lazy and apparently like to live dangerously. Most of my data is backed up or recreatable, but it's those dotfiles that I'm worrying about again, especially as I creep closer to the date when I can get a new laptop.<br />
<br />
Which brings us to the other similar scenario of a system upgrade. If you are like me, you use a work supplied laptop for most things. It becomes heavily customized over time, both to account for your personal tastes and also with little things that make your job and working with your work's environment much easier. In my case, I have bash aliases and ssh config settings that simplify logging into production systems, tunneling through bastion servers, remembering what ssh key is used where and what port is needed. The other thing about a work laptop is that it is usually on a replacement/refresh cycle. In fact, mine is due for refresh this year. At this point I've had 3 years to customize it, and now I am faced with starting over and recreating everything. Having gone through this already when my drive died, I am not looking forward to doing it again.<br />
<br />
The last scenario that comes to mind is someone who frequently works on multiple systems. Maybe you use a desktop at work and another at home, or a laptop mixed with some other systems, or something else where you are either frequently finding yourself at a new machine or maybe one that you haven't used in a little while. You'd like to be able to have any tweaks you make on one system be reflected on the others, or if it is a completely new system you would like to get your configuration onto it as quickly and painlessly as possible.<br />
<br />
So, you start looking for ways to bring your dotfiles with you. The ideal scenario for me would be that I could log in to a new system, issue a few commands, maybe manually import an ssh and pgp key I maintain on an encrypted drive somewhere, and have my system configure itself with my dotfiles and the standard applications I use. Some people claim that <a href="https://www.vagrantup.com/">Vagrant</a> is perfect for this, and while it does sound useful, it has some downsides. First, it runs enclosed environments inside VMs. This is probably good practice, and provides isolation, but it does not do anything for configuring the base system itself. It would also require a serious workflow change, which isn't a showstopper, but does mean a substantial amount of work to get into place. In short, I want to look into using <a href="https://www.vagrantup.com/">Vagrant</a> in the future, but the problem it solves seems to be a little different than this specific one.<br />
<br />
A growing trend is to store your dotfiles on github. I started down this road, and will go into more details about why, and what problems I ran into, in the next post in this series.Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-81646121835662991022012-08-30T16:12:00.000-05:002012-08-30T16:12:41.153-05:00On the Abstraction of the ComputerI have a theory. I've had this theory for a while, and occasionally will subject those around me to a description of it. This is going to be one of those times.<br />
<br />
<br />
First, a little background. I strongly think of the computer as a tool that enables us to perform tasks. Some of these tasks we did before computers were available, like balancing our checkbook, and the computer enables us to perform the task faster or more accurately or in some way better or easier. Other tasks did not exist before the computer came along, like surfing the web. Either way, the computer is simply a tool, and the tasks are really the important part of the equation.<br />
<br />
Really, I think of the computer as a meta-tool, a toolbox or a workshop basically. The applications are the individual tools. I think understanding this idea, in some form, is one of the key factors in whether a novice user progresses to a state where they are comfortable with alternate browsers like Chrome or are stuck in the "Internet Explorer == The Internet" mindset. It comes down to recognizing that the internet is the destination, that the browser is simply the tool of choice for getting there, and that there is more than one tool available that will work.<br />
<br />
I use the browser in this example because these days it is the most common application that a friend or family member will eventually suggest a replacement for. I really do think that when this happens it is essentially a skill growth check for the user, to use RPG terms for a moment. If they get it, then it usually clicks for them, and they proceed to eventually discover other alternate applications. If they don't get it, then their skill growth is limited, at least in this area, until the next time they encounter this idea.<br />
<br />
So, having laid out the background, let me tell you about my theory. My theory is that sometime in the near future, the act of computing will become abstracted and separated from the actual tool, the computer itself. I've had this theory since before tablets started getting popular, and it keeps getting more convincing to me.<br />
<br />
When I imagine this in my head, I always think of a fictional house, one with multiple computers in it. In this house, in the present time, if I want to use my personal finance software to work on my family's budget, I will go to the computer in our home office, because that is where I have that software installed. If my kids want to play a computer game, they will go to the computer in the game room, because it has the fancy graphics card and nice monitor. If I want to look up some random piece of trivia while watching a movie, I will grab a smartphone or a tablet or maybe a chromebook, depending on what is within reach. I pick the computer to use for each task based on its convenience of access as well as its capabilities (is the app installed here, does it have the needed hardware).<br />
<br />
Imagine that I have 7 computing devices in my house. That isn't that many, I promise. I could meet that with 2 smartphones, 1 laptop, 1 tablet, 2 gaming pcs, and 1 office pc. I didn't even get a home theater pc into the setup. How many of those devices are in use at any given point in time? How much processing power is going unused? It is really inefficient.<br />
<br />
Look at the enterprise IT world, and this sort of scenario is where the growth of virtualization came from. Instead of 10 separate servers for 10 different apps, each of which consumes 10% cpu on average, combine them onto 1 physical server, and let the CPU run at 100%. Or onto 2, let it run at 50%, and you have spare capacity in case one server dies.<br />
<br />
This is the reason I think computing will become abstracted. What if the computing resources in your house were a pool, instead of islands? What if your apps were centralized, and no matter what computer I was at I could access anything I wanted to? That lets me use the most convenient computer, which means I don't have to go upstairs to the office to work on the budget, because I can access that application from the laptop, or in the future, from the tablet or smartphone.<br />
<br />
But what about the gaming machines? What about specialized hardware? Well, I don't have a present day solution, but I imagine that using an On-live type of system, that will also become part of the pool. What if I could sit down at the tablet, and it would use the graphics card from one of the gaming computers (that nobody else was using) to render the graphics for the game I wanted to play? What if any screen, keyboard, mouse (or touchscreen) combination could use any of the computing resources in the house to perform any task you wanted. Then you wouldn't have to worry about whether the machine had the capabilities. Once you had the capabilities in your home computing pool, they could be accessed from any device. Now it would just be an issue of using the most convenient device.<br />
<br />
But, you would still be picking based on the size of the display, whether it has a mouse or a touchscreen, etc. Honestly, I think that will go away too. In my vision, I see a time when any surface can become a display and/or an input device. Sure, you could still have dedicated monitors and mice for those times you really want to play a game and reflexes matter. But for that time you are in the kitchen and want to look up that recipe your mom emailed you, it can pop up on some unused counter space as a display with touch controls. Want to finish a movie while relaxing in a hot bath, it can be displayed on the wall in the bathroom. Essentially, this is a world where every surface could be a display on demand, and the actual computer boxes themselves have faded away to a room where you plug in computing modules to add capacity.<br />
<br />
I think once everything can be a computer, that the computer as we think of it will be an outdated concept. That the idea of having to go upstairs to work on your budget, instead of clearing some space and sitting down at the kitchen table and having it be displayed there, will seem as old fashioned as driving around without turn by turn GPS on your phone with Pandora streaming in the background. <br />
<br />
Combine this with wireless technology, and mobile devices can join the pool and access your resources when you are at home, and leave it and run on their own when you are gone. These would even work for devices like Google Glass, or other future technologies. <br />
<br />
That is my theory. It's really more of a vision. I'm not claiming the implementation details will be correct, but I think that at a general level, it is the direction we are headed.<br />
<br />
Now, why did I decide to share this with you today? I was listening to a podcast today, and they were talking about VDI (Virtual Desktop Infrastructure). This is technology like Citrix boxes that provide virtual desktops and apps to you. Virtual, in that they are run on the centralized hardware, but displayed on your screen. It's the new thin client.<br />
<br />
They were speculating a little about where it was headed, and commented that the future might be one where we no longer bought a computer, but bought a desktop with apps installed instead, and just accessed the same one from wherever we were.<br />
<br />
That got me thinking about my theory that also had to do with computers going away, and that is why I decided to write this down finally.<br />
<br />Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-83933587440965150652012-03-23T16:00:00.000-05:002012-03-23T16:00:07.880-05:00SSL, Tomcat, Android, and keeping my sanityRecently, at work, I was tasked with getting some SSL certs installed and working on a tomcat installation. This was a bit outside my normal duties, as work is rather segregated, and tomcat falls under an application admin's responsibilities, not a server admin's. However, there isn't an app admin available who knows tomcat, so I was given the job by virtue of <a href="http://i0.kym-cdn.com/photos/images/newsfeed/000/158/329/9189283.jpg">competence</a>, and having built the server. For reference, the OS was RHEL 5.7, running Tomcat 6.0.33, and trying to use JSSE for SSL.<br />
<br />
Our normal setup for web servers is apache, sometimes using the Cool Web Stack or something like it, so tomcat isn't something that my team has any familiarity with. Additionally, I built this server, but that was just the OS (RHEL). A third party installed the tomcat application server with grails on top of it, for the purpose of hosting some mobile apps (Android and iPhone) created using their toolkit. They set this up without SSL, and in looking through their available documentation, the only reference I could find to SSL was a single footnote on a document about the security of the system which essentially said, since you asked, of course this should all be done over SSL. Just ignore the fact that none of our documentation or reference implementations bother to do so.<br />
<br />
So, I set about trying to get the SSL cert working, armed with a set of instructions (team standard procedures) for doing so with apache, and a single page from our wiki on setting up SSL in Tomcat. This page was proof that someone had done so in the past, but it consisted of a couple command lines to run, some java source code to be compiled and then invoked, and no reference to the implementation details of telling tomcat to use the SSL cert itself. The command line invocations converted the standard x509 cert we received from our CA and the key we generated when making the CSR into another format, <a href="http://en.wikipedia.org/wiki/Distinguished_Encoding_Rules">DER</a>. The java source code formed a program which would read in the DER-formatted certificate and key, and convert them into a Java keystore (JKS) formatted file. The instructions were for Solaris, our main OS, and not RHEL, which these servers were running. The java program wouldn't compile, because the JDK that the third party installed for use with tomcat seemed to have a broken compiler! I installed a new jdk from the RHEL repos, and found that the source code didn't have any import statements, which also caused problems. A bunch of wildcard-based imports later, I had a compiled program and a freshly created keystore. I installed it into tomcat, using some helpful <a href="http://www.mulesoft.com/tomcat-ssl">online instructions</a>, made a mental note that I wanted to come back at some point and find a way to convert the SSL cert without using a custom compiled program, because that seems like overkill for a problem where standard tools should exist, and continued on my way after verifying that the site was accessible over SSL.<br />
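For anyone wondering about the part the wiki page skipped, actually pointing tomcat at the keystore boils down to an HTTPS connector in conf/server.xml along these lines (the path and password are placeholders, and this is roughly what the instructions above walk you through):<br />
<br />
<pre class="code"><code>&lt;!-- JSSE HTTPS connector for Tomcat 6 --&gt;
&lt;Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="/path/to/fido.jks" keystorePass="changeme" /&gt;</code></pre>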
<br />
I did end up finding a way to convert from the openSSL cert to a java keystore without using a custom compiled java program. After <strong>much, much</strong> searching, I found a tough to navigate site that was stuffed with useful information! <a class="urlextern" href="http://www.herongyang.com/Cryptography/keytool-Import-Key-openssl-pkcs12-Command.html" rel="nofollow" target="extern" title="http://www.herongyang.com/Cryptography/keytool-Import-Key-openssl-pkcs12-Command.html">This page</a> shows how to use the <em>openssl</em> tool to combine a key and a cert into one PKCS12 file. Then, <a class="urlextern" href="http://www.herongyang.com/Cryptography/keytool-Import-Key-keytool-importkeystore-Command.html" rel="nofollow" target="extern" title="http://www.herongyang.com/Cryptography/keytool-Import-Key-keytool-importkeystore-Command.html">this page</a> shows how to use the java (or <acronym title="Java Development Kit">JDK</acronym> maybe) command <em>keytool</em> to import the PKCS12 file into a Java KeyStore file. Thus, we are now able to use two commands where before we used 2 different commands & a custom compiled java program. (In theory, we don't have to perform the conversion to a Java KeyStore format, as Tomcat can be told to use the PKCS12 file directly as a keystore. However, this involves more poorly documented tomcat configuration, and I didn't want to keep pressing my luck once I got everything working. If someone else wants to try for efficiency later on, then they are more than welcome to it.) <br />
<br />
Those in charge of this project then decided I should turn off all non-SSL traffic to the servers. After doing this, they discovered that they could not download the new APK files to their Android phones from the server over SSL. Android was throwing an untrusted certificate error (the kind you expect with a self-signed cert, not a CA issued one) and will silently fail to download files from a server over SSL in this scenario.<br />
<br />
I suspected that the intermediate certs were not being handed out correctly, and was eventually able to prove this with the help of <a href="http://ssltool.com/?action=sslCheckOpenSSL">these</a> <a href="http://certlogik.com/sslchecker/">two</a> sites. Our internal instructions said to import the intermediate cert from our CA into the JKS file with an alias of intermediateca. The <a href="http://tomcat.apache.org/tomcat-6.0-doc/ssl-howto.html">official instructions</a> said to import it with an alias of root. Somewhere else online said to use an alias of intermediate. I tried all of these, as well as combining them all, with no luck. I looked through the documentation, and could find no mention of a specific alias name to use for tomcat to magically pick it up and serve it out.<br />
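If you'd rather not rely on a web form, openssl itself will show you the chain the server actually presents, which is another way to check this (hostname and port being whatever tomcat is listening on):<br />
<br />
<pre class="code"><code>openssl s_client -connect fido.example.com:8443 -showcerts</code></pre>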
<br />
I went searching again, and finally stumbled upon <a class="urlextern" href="http://stackoverflow.com/questions/8120690/tomcat-doesnt-deliver-intermediate-certificate-https" rel="nofollow" target="extern" title="http://stackoverflow.com/questions/8120690/tomcat-doesnt-deliver-intermediate-certificate-https">this question</a> on stack overflow. This was the same problem I was having, so I tried the solution, but ran into problems. They placed the intermediate cert into /etc/ssl/certs, then ran the command to create the PKCS12 file with an additional flag <em>-chain</em>. RHEL doesn't have an /etc/ssl/certs, so I searched, and found the equivalent at /etc/pki/tls/certs. I tried placing the intermediate cert there, and running the command with -chain added, and got errors because the cert wasn't found. I then went looking to see what options could be passed to the <em>openssl</em> command, and found the -CAfile and -caname flags. Using these, I was able to use the -chain flag and eventually get a Java KeyStore that caused tomcat to serve out the intermediate cert correctly.<br />
<br />
<br />
After some experimenting, I finally isolated what creates a working keystore. The -chain flag to the openssl command is the critical piece. Combine this with -CAfile to create the PKCS12 file with the intermediate cert included. The -caname flag turns out not to be needed at all. Importing the intermediate certs into the JKS (Java KeyStore) file with an alias doesn't matter at all. (It doesn't break anything to have them there, but it also isn't needed for it to work.) Counterintuitively, the working JKS file will appear to contain only one cert when viewed with keytool -list. <br />
<code></code><br />
<pre class="file" style="font-family: inherit;"><code>[root@fido sslcerts]# keytool -list -keystore fido.jks
Enter keystore password:
Keystore type: JKS
Keystore provider: SUN
Your keystore contains 1 entry
tomcat, Mar 22, 2012, PrivateKeyEntry,
Certificate fingerprint (MD5): 83:F5:A4:7D:2A:39:35:FB:8B:41:B7:34:B5:97:45:92</code></pre><br />
I was able to verify with both of the <acronym title="Secure Sockets Layer">SSL</acronym> checking sites above that this file will serve out the intermediate certs correctly. On Android, it no longer throws the untrusted cert error, and now silently validates. So, now, we have a new working procedure that only requires the same files we were downloading from the CA before, and uses two commands to turn them into a working Java KeyStore that will serve out intermediate certs correctly:<br />
<div style="font-family: inherit;"><code> </code></div><code> <pre class="code" style="font-family: inherit;">openssl pkcs12 -export -inkey fido-2012-03-15.key -in fido-2012-03-15-cert.cer \
-out fido_key_cert_chain.p12 -chain -name tomcat -CAfile fido-2012-03-15-interm.cer
keytool -importkeystore -srckeystore fido_key_cert_chain.p12 -srcstoretype pkcs12 \
-srcstorepass changeme -destkeystore fido.jks -deststoretype jks -deststorepass changeme</pre></code><pre class="code" style="font-family: inherit;"><code></code> </pre><pre class="code"></pre><pre class="code"></pre>Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com3tag:blogger.com,1999:blog-9896068.post-72923588871940641162011-11-22T10:51:00.000-06:002011-11-22T10:51:43.297-06:00Citrix Receiver on 64-bit Arch LinuxI recently made the switch from Ubuntu to Arch Linux (with E17) on my workstation at work, and I am now in the process of getting all my apps setup and working again. Pretty high up on that list was the Citrix receiver. Work is a largely Windows based setup, with Exchange for e-mail. I actually prefer a few things about the newer versions of Outlook, when compared with most of the Linux clients I have tried in the past.<br />
This means my choices boil down to:<br />
<ol>
<li>Get Outlook running under Wine (yeah, that'll work out real well) </li>
<li>Run Outlook in a Windows VM (I have this setup, but prefer not to run the VM at all times, it is a real memory hog)</li>
<li> Run a separate computer with Windows on it, just for Outlook (not going to happen)</li>
<li>Get the Citrix receiver working, and use the Citrix version of Outlook that we make available</li>
</ol>
I chose #4. The problem here is not the linux part; install packages exist for the receiver on linux. The problem is the 64-bit part. Our Citrix server only hands out a 32-bit installer. I went looking, and while Citrix has started offering 64-bit installers now, they only come in .deb or .rpm. There isn't a .tar.gz package like there is for 32-bit. Arch linux, of course, does not use .deb or .rpm packages.<br />
<br />
I had the receiver installed on Ubuntu, and it mostly worked. What I could not do was run the manager app to change the settings, which meant that I could never map my local hard drive to show up as a drive in the software run on Citrix. For e-mail, this meant I could not save or send attachments unless I used a USB stick for them, because the default settings would auto-mount USB devices.<br />
<br />
I found <a href="https://wiki.archlinux.org/index.php/Citrix">these instructions </a>on the Arch wiki, and followed the manual install instructions, probably because I had already downloaded the package from Citrix and sunk some time into getting the installer to run, so I didn't want to take the easy route and use a pre-built package now.<br />
<br />
Just to get the installer to run, I had to do some research, and finally figure out that the "no such file or directory" errors being thrown by echo_cmd were because I only had the 64-bit glibc libraries, and I needed to install the lib32-glibc from the multilib repo as well.<br />
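On a 64-bit Arch install, that amounts to enabling the multilib repository and pulling in the 32-bit C library, roughly:<br />
<br />
<code># uncomment the [multilib] section in /etc/pacman.conf, then:<br />sudo pacman -Syu lib32-glibc</code><br />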
<br />
I followed the instructions on the wiki, making modifications as I went because my install of the receiver was in a different directory, and got a working install, except that I had problems getting firefox to see the Citrix plugin for some reason. I also was not able to get the manager app (wfcmgr) to run, despite the wiki article explicitly saying it should. I was getting this error:<br />
<br />
<code>/opt/Citrix/ICAClient/wfcmgr: error while loading shared libraries: libXm.so.4: cannot open shared object file: No such file or directory</code><br />
<br />
I did some more digging, and found that the 32-bit library package from the AUR containing libXm.so.4, aur-lib32-openmotif, installed into the /opt/lib32/usr/lib directory instead of the /usr/lib32 directory where the wfcmgr program was trying to find it.<br />
<br />
One way to fix this is with a simple<br />
<br />
<code>sudo ln -s /opt/lib32/usr/lib/libXm.so.4 /usr/lib32/libXm.so.4</code><br />
<br />
however, I chose to modify the PKGBUILD to put the libraries in /usr/lib32 with all the others, in case a program went looking for one of the other openmotif libraries in the future.<br />
<br />
I also figured out that my problem getting firefox to see the plugin was that I checked whether the plugin was setup with<br />
<br />
<code>sudo nspluginwrapper -l</code><br />
<br />
using sudo here, because the wiki article showed the install command, <code>nspluginwrapper -i</code>, being run as root. This showed the plugin to be in place already:<br />
<br />
<code></code><br />
<code>/root/.mozilla/plugins/npwrapper.npica.so<br /> Original plugin: /opt/Citrix/ICAClient/npica.so<br /> Plugin viewer: /usr/lib/nspluginwrapper/i386/linux/npviewer<br /> Wrapper version string: 1.4.4-1</code><br />
<br />
It took me too long to realize that this plugin was not system-wide, but was root-specific, and that I needed to do this as my user instead. I checked and it was not setup for my user, so I used the -i command to install it, and restarted firefox. It is now detected, which means I don't have to skip past the install prompt from the server when the silent-detection routine fails to find the plugin on my system.<br />
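For anyone else hitting this, the fix amounts to re-running the wrapper install as your own user, pointed at the Citrix plugin, and then confirming it shows up in the list:<br />
<br />
<code>nspluginwrapper -i /opt/Citrix/ICAClient/npica.so<br />nspluginwrapper -l</code><br />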
<br />
In the end, I have a newer version of Citrix Receiver installed and I was able to setup my home directory to be mapped as a drive to be seen inside the Citrixed apps. I do not have the USB device support, because the installer can't figure out Arch's system for managing services, but I don't think I'll miss that.<br />
<ol></ol>Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-58887024457817669432011-08-16T17:32:00.001-05:002011-08-16T17:34:16.482-05:00Sign Criticism<div><p>Driving home after work today, I saw one of those little signs stuck in the ground at a corner that said "Get Customer Leads" and had a phone number. The two thoughts that went through my head were: 1)  If you are so great at generating customer leads, then why aren't you contacting me instead of expecting me to call you?  2) If I were to call you,  would I end up on your list of customer leads that you sell to others? </p>
</div>Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-32843838275993866182009-10-09T23:10:00.002-05:002011-03-04T07:28:09.444-06:00Proprietary Drivers Lead to Hardware Duplication<p>I'm your typical geek, obsessed with gadgets and technology, so I could probably burn through almost any size budget, buying cool things to hack on and/or play with, without even blinking. Conversely, I'm young, and still working my way up the ladder, so my income is definitely limited. Finally, I'm married, and have two young children, so I have far better things to spend my money on than the new 802.11n-enabled light bulb. (Bonus points if it also speaks wireless DMX.)</p><p>So, the net effect of this is that when I do get something new to play with, I have to choose carefully, and try to get the most tech bang for my dollar. Case in point: the first GPS device I've gotten to play with is the one in my corporate-supplied Blackjack 2 smartphone. Now, I've wanted a GPS for a while. <a href="http://en.wikipedia.org/wiki/Geocaching">Geocaching</a> looks like a lot of fun, and is something I've wanted to try for a while. I'm also interested in <a href="http://en.wikipedia.org/wiki/Wardriving">wardriving</a> as well as helping out <a href="http://openstreetmap.org">OpenStreetMap.org</a>.</p><p>However, a Windows Mobile smartphone without wifi isn't really the best tool for any of these activities. It does have GPS built in, though, and it has bluetooth, just like a typical bluetooth GPS dongle. And, thanks to a hack I found online, the internal GPS can be accessed directly on a COM port, instead of only through the Windows Mobile APIs. In an ideal world, I could just read the GPS NMEA data over bluetooth from my laptop, and use it the same as any other GPS dongle. However, because of the proprietary nature of everything involved, this isn't an option, and I'm forced to buy a different GPS receiver if I want to use the GPS with my laptop.</p><p>This is all a software issue though. The technical capability clearly exists in the devices. However, the drivers don't allow for it. The drivers are written by hardware companies who don't want anyone else to know the intricacies of how to interface with their hardware, as that is "proprietary knowledge" and a "trade secret". Now, there is some legitimate concern here. If you know how to talk to a piece of hardware, and can map inputs to outputs, then reverse engineering, especially the two-team, clean-room style, becomes much easier. However, many hardware companies write really lousy drivers, full of bugs, and lacking many features.</p><p>This is a fight the Linux and BSD communities have faced since the beginning. There isn't enough market share for most hardware manufacturers to create their own drivers for open source operating systems. However, despite offers from the community to do the development, the manufacturers are also unwilling or unable to release any sort of technical specifications, or provide any sort of support at all. That is what created the old-school Linux mentality of finding something you wanted to have work, then hacking at it until you had created a working driver for it. 
The end result here is that the consumer loses, and the manufacturers don't see the problem, when I have to buy a separate GPS device because my laptop can't use the one built into my smartphone.</p>Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-25710850367590206402009-01-30T21:59:00.004-06:002009-10-09T23:15:16.578-05:00Metrics Can Lead to Poor Customer Service!<p>We eat far too much fast food. Unfortunately, it is too attractive to us. It is quick, relatively cheap, and simple. Also, we can usually pick up something that the kiddos will eat, or at least not complain about. I think the real kicker often is the quick, and that it doesn't mean having to clean dishes around the house, which is all too big of a plus for us right now.</p><p>I was realizing today just how low my expectations have become. No matter where I'm ordering, no matter how straight-forward the order, I expect that there will be at least one mess up. If it's just that they forgot to include straws for the drinks, I'm happy. If it is that they ignored a special order request, despite it having been heard, and ticketed properly, I'm unsurprised. And if it is the failure to include some add-on condiment (like sour cream at Jack in the Box) that I paid for, I'm only mildly annoyed. Which is really pathetic. I should expect to get my food the way I ordered it, all of it. If it doesn't get entered properly when I order it, I understand that, although I get annoyed if it happens after I've repeated myself and corrected the order 5 or 6 times. But, when the order gets taken and entered correctly, but made incorrectly, that is either laziness, carelessness, or sloppiness. And yet, I've come to accept and expect it.</p><p>However, the point of this rant is about the other annoying practice most fast food restaurants have gotten into at the drive-through window, asking you to pull into a parking space and wait for your food. I find that this also rarely perturbs me, although it really gets under my wife's skin. When I've ordered something that I know takes a bit longer to cook, and there are several cars behind me, then I have no problem with pulling into a space to wait. My only annoyance at it is that I know I won't be asked about any condiments I would like, and I won't be able to point out any issues I find right away, like I could at the window. However, whenever I get asked to pull in when there are no cars behind me, then I blame metrics. Or, an even better variant that I experienced today: this was one of the places where you pay at the first window, and get your food at the second. There were about 3 cars behind me when I paid, and the car in front of me had already left the second window. I was asked to wait at the first window, and not pull up until I was told that my food was ready.</p><p>Clearly, both of these requests are intended to minimize time spent at the delivery window. Having worked in a fast food restaurant in college, I know that things like wait time and time at window get tracked by management. Just like we were warned when the "secret shopper" would be stopping by, and everyone knew who he was, so his order always exceeded the minimum standards for amount of ingredients, the employees are going to do anything they can to boost these metrics if there is either reward or consequence attached to it. 
Whether it is average wait time, average time at window, or even max time at window, the drive is to improve the measured metric, even at the absolute expense of the customer experience. This isn't all that different from <a href="http://www.fastcompany.com/magazine/132/made-to-stick-curse-of-incentives.html">backfiring incentives</a>. Of course, this doesn't apply only to the fast food industry, but is a danger with any metric in any industry. If the metric becomes the be-all end-all, then it's entire purpose has been defeated. This is on my mind right now, because I am now part of setting metrics both for myself, and for the department I'm currently responsible for training and overseeing.</p>Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com1tag:blogger.com,1999:blog-9896068.post-59973262013871764812009-01-08T18:10:00.005-06:002009-01-08T19:47:19.515-06:00Space, the Forgotten Frontier<p>I've always been a huge fan of the space program, so the existence thereof has never been something I thought needed justification. However, I find that in my daily life, many of the people I run into hold the opinion that the space program is a huge waste of money with absolutely no benefits whatsoever. I've mostly written this off as ignorance, but still been bothered by it. However, a recent trip to Florida made me consider how poor a job of marketing itself NASA has done.</p><p>On this trip, we spent several days at Walt Disney World, then stopped by NASA's Kennedy Space Center, before visiting the Atlantic Ocean (as we had several people, our kids included, who had never seen it before) and then heading back home. The difference between Disney World and NASA was one of the most dramatic I have ever seen. Granted, I was giving Disney World pretty high marks. I had only been the once before, as a young child, so my only memories of it were faint, and glossy with childhood nostalgia. In addition to that, I had recently read Cory Doctorow's book <a href="http://craphound.com/?p=147">"Down and Out in the Magic Kingdom"</a>, so I was reliving the book as I walked through the park. So, I spent the entire time at the park annoying my wife with comments about how efficient I found things, how well designed and themed areas were, how well things tied together, how this area was important in the book, etc.</p><p>We then visited NASA, where it felt like we had found a run-down back-country tourist-trap/ghost-town. The place was almost deserted, and looked like it was running on a skeleton crew as well. There was one ticket lane open, but the way they were set up, you had to walk up to each one to see if it was open or not. You were largely left to yourself to explore the exhibits, and read the plaques, with little or no direction from the staff. It was overall a horrible experience, and one the kiddos did not enjoy at all. While I can't blame them for that, it did hurt me, their geek dad/space buff, that they weren't as fascinated by the shuttles and space hardware as they were by the princesses and talking animals a few days before.</p><p>Even for me, the experience was rather disappointing. I wasn't engaged at all, I didn't learn anything I didn't already know. Yes, there was a shuttle on the launch pad, but you couldn't really see anything more than the top 6 inches of the External Tank from the observation tower. Granted, I had fussy kids to keep me busy, so I didn't get to see some of the exhibits I wanted to. 
Additionally, this visit was near the end of our trip, and everyone was gearing up for the couple days in the car it would take to get home. Still, it was the sheer underwhelmingness of the experience that left the deepest impression on me.</p><p>We are talking about the space program here. Space Shuttles with Solid Rocket Boosters and External Tanks, some of the most advanced technology in this country, in the world even. Yet, I couldn't really get into it. If only NASA had the Walt Disney Imagineering department working for them, how might the Space Center be different? How might the public's impression of the Space Program as a whole be different? Space travel has become routine, so it only makes the news when something goes horribly wrong. At any given point in time, how many people around you could tell you how many people are currently in space?</p><p>NASA has done very little that I have seen to work on its public image. So, consequently, very few people know what the agency's mission is, or what its goals are. Even fewer people seem to understand the impact NASA has on life here on earth. The common opinion is that money spent by NASA is either a) turned into smoke when a rocket launches, or b) launched into space; either way, it supposedly has no impact here on earth and is being completely wasted. So, let me throw some numbers your way, ok?</p><p>NASA's requested budget for 2007 was $17 Billion. <a href="http://www.thespacereview.com/article/898/1">(source)</a> In isolation, that is a large number. Certainly more than most of us will ever see in our lifetimes. However, when we start to add in some context, how does it look? The recent bailout package was approved for up to $700 Billion. The auto industry is asking for $25 Billion in bailout loans. The US national budget for 2007 was $2.784 Trillion, so NASA's slice of the pie was 0.58% of the national budget. Meanwhile, social programs (the place most people say the money going to NASA should be spent instead) totaled $1.581 Trillion. So, for every $1 spent on NASA, we are already spending $98 on social programs. Finally, for every dollar spent by the government on R&D in NASA, it is estimated the government earns $7 in personal and corporate income taxes. <a href="http://www.thespaceplace.com/nasa/spinoffs.html">(source)</a> Let me repeat that: <b>the government brings in $7 for every $1 it sends out to research projects inside of NASA</b>.</p><p>The last piece of the puzzle is what we call spinoff technologies. Things that had their beginning in the space program, but ended up in the public, usually in very different forms. These are the things I think NASA could really do a better job of publicizing, although they have put together a <a href="http://www.nasa.gov/externalflash/nasacity/index.htm">very neat site</a> to showcase some of them. Many of them are exotic-sounding things you'll probably never encounter, such as advanced welding systems and magnetic liquids. However, there are a few that are very important to many people. <ul><li>Infrared in-ear thermometers, a parent's best friend.</li><li>Cordless vacuums, like the DustBuster.</li><li>LED lights.</li></ul></p><p>Of course, there is also the obvious: telecommunication satellites. The things that make so many of your phone calls, internet usage, and TV watching possible. Not to mention the many advances in weather monitoring and forecasting, useful both to those in the path of a hurricane, and those with a farm in the midst of a drought. 
These are things that the space program brought to life, usually inadvertently. It can also be argued that the current age of advanced technology is largely thanks to the space race era, and the many engineers and scientists who worked on the projects, and many others who may never have been involved with the space program directly, but were inspired to go into science and technology because of it. Because ultimately, to me, that is what the space program is all about. It is about solving problems, about exploring, and about learning new things and adding to the sum total of human knowledge, and inspiring new generations to do the same things.</p><p>I am a Christian, so I'm not one who believes our only hope for survival is in colonizing other planets, and eventually other solar systems. However, I do believe that the future of the US of A, as a country, relies on our being leaders in technology and science, areas we are quickly falling behind in. We once were the manufacturing powerhouse of the world, but now that is all being shipped abroad. We now import much of our food, more than we really have to. If we stop leading in innovation, what will we excel in, as a country?</p>Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-57922785594385809382007-09-21T03:57:00.000-05:002007-09-21T04:09:37.088-05:00My Webpage Addiction<p>So, yesterday, I had a recurring issue happen to me, one which causes me much frustration. I lost about 100 webpages in Firefox. Yes, that's right, I lost webpages. Ok, let me explain what I mean.</p>
<p>First, I have a problem. I collect webpages. I think that's about the only way to describe it. I will find a new webpage through any of a variety of means. I might open 15 pages from search results when I'm trying to troubleshoot a problem, or I might follow some links from various webcomics, or be sent something by a friend. Then, for whatever reason, I don't just read the page and move on. Instead, I save it. Either because I don't have the time to read it entirely right now, or I want to try it out later, or I think it's great reference material to keep around, or I want to send it to someone else. So for whatever reason, I want to keep it around.</p>
<p>Because of this, I've got hundreds of untamed bookmarks, synced between browsers, in theory, whenever Google Browser Sync works properly. (It synced at the beginning, but I'm not sure about lately.) I've also got a del.icio.us list that is 234 pages long at 10 items per page. And lately, I've become known (read as: ridiculed) for having as many as 150 tabs open in several windows in Firefox at any given time. Yes, starting Firefox is a 15-minute ordeal.</p>
<p>The strategy of just leaving the tabs open works best for things I just haven't had time to read, or want to try in the next day or two. In theory. So, pre-Firefox 2.0, I used an extension to save my tabs, and restore them in case Firefox crashed on me. It also had the nice extra of saving the last 2 sessions, so if for some reason it didn't load my tabs properly, I could go back to the previous saved version. (Just don't close Firefox after not having your tabs load, or you will lose the good saved session information.) Firefox 2.0 came along, and with it, <a href="http://wiki.mozilla.org/Session_Restore">integrated session saving</a>. This has been nice, and is more robust than the extension I relied on before (which is no longer compatible with current versions). However, it does not have the ability to re-load the session, or to load an older session.</p>
<p>Why does this matter? Well, I usually have the most tabs open on my laptop. And, for whatever reason I haven't tracked down yet, this is also my least stable installation of Firefox. I deal with a few crashes each day, on average. My laptop also has a quirk dealing with its Wifi, often requiring me to powercycle the radio before it will connect. Thankfully, it's just a key press on my laptop. However, if I fail to realize I'm not connected before I launch Firefox, or if (as happened this last time) Firefox gets auto launched because it was open when my previous X11 session ended, then all my tabs come up unable to connect. Once this happens, I have to go connect to the net, then reload each tab by hand. If I don't, then they will be saved as blank tabs, and Firefox will forget what site was previously loaded in them. Once this happens, they are usually gone, because I obviously do not remember what I had loaded in 150 different tabs. In addition, many of the tabs have been open for weeks, or longer, so they will no longer be listed in my browser history.</p>
<p>So, today, when my laptop decided it wasn't going to resume from suspend anymore, but would instead hang on booting, I managed, through a series of events, to have this happen. This is probably the tenth time or so I have lost my tabs. It really, really ticks me off. So I decided to see what I could do about it. Unfortunately, the answer is, if I had known what to do, and had been quick enough, I might have been able to save them. Maybe. Depends on how far gone they were once I got the rest of the system functioning again. (How does Gnome just forget you had a notification area on your taskbar?)</p>
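<p>For future reference (and assuming I'm right about where Firefox keeps this stuff), the thing to do when you catch it in time is probably just to stash a copy of the session files before Firefox gets a chance to overwrite them. Something rough like this, using my profile directory as the example:</p><pre><code># Grab the session files before Firefox writes over them with blank tabs
cp ~/.mozilla/firefox/8ryaickb.default/sessionstore.js /tmp/sessionstore.rescue.js
cp ~/.mozilla/firefox/8ryaickb.default/sessionstore.bak /tmp/sessionstore.rescue.bak
</code></pre>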
<p>There is an <a href="http://wiki.mozilla.org/SessionRestore/API">API</a> available, with something vaguely like what I would want, mentioned as a potential use case. However, my coding skills are infantile, and creating a program to utilize this would be way beyond me. I did, however, come up with a stopgap measure. I added an entry to have logrotate copy the session saving information for me (which is updated routinely as Firefox runs). This means, in theory, that I will have snapshots from several days available, and hopefully anything added after the snapshot was taken will be recent enough to still be found in my history.</p>
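<p>Getting a snapshot back should then look roughly like the following. This is only a sketch: the date suffix is just an example (it depends on when logrotate ran), and Firefox needs to be closed first or it will overwrite the restored file when it exits. The logrotate entry that actually creates these snapshots is just below.</p><pre><code># With Firefox closed, restore a dated snapshot over the live session file
cd ~/.mozilla/firefox/8ryaickb.default
cp sessionstore.js sessionstore.js.broken             # keep the bad one around, just in case
zcat sessionstore.js-20070920.gz > sessionstore.js    # pick whichever day's snapshot looks right
</code></pre>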
<p>Yes, I do realize that the real solution is to change my habits, and stop leaving so many tabs open. However, this is my stopgap solution. Time will tell if it works or not. May I never need to use it.</p>
<p>My configuration entry for logrotate:<pre><code># Keep a week of daily, dated, compressed copies of the Firefox session files
/home/madasi/.mozilla/firefox/8ryaickb.default/sessionstore.bak
/home/madasi/.mozilla/firefox/8ryaickb.default/sessionstore.js {
    # keep the last 7 snapshots, taken once a day
    rotate 7
    daily
    # copy the files instead of moving them, so Firefox never notices
    copy
    # name the snapshots by date rather than by number, and compress them
    dateext
    compress
    # don't complain if one of the files isn't there yet
    missingok
}
</code></pre></p>Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-67012646955435757002007-09-15T05:11:00.000-05:002007-09-15T07:54:15.413-05:00Kiddo Quote<p>So, a while back, we had bought the kids a small treat of some kind, a snack size bag of cookies I think. So, on my day at home with them, I gave them each one, then closed the bag and set it down. So, several minutes later, my daughter walks into the room where I was folding some clothes, hands me the bag, and asks if she can have another one.</p><p>I pick the bag up, and being looking at the nutritional information on it. She asks me what it says, so I start reading some of it aloud. Things like serving size, calorie count, etc. She gets this very serious, disappointed sound in her voice, and says, "Oh, man."
<br />"What?", I asked, "Do you know what any of that means?"
<br />"It means I won't get any more."</p>
<p>So cute I almost cried.</p>Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-28077723124412133222007-09-14T10:15:00.000-05:002007-09-14T10:16:01.954-05:00Bug Affliction<p>A few weeks ago, I lost my swap space again, exactly like last time. Once again, a resume from hibernate failed after a routine fsck ran. And once again, I missed the fact that my swap was gone until later when I noticed the computer failing to hibernate due to lack of swap space.</p><p>Since this was the second time it happened, I was sure that it wasn't due to an accidental run of mkswap. Not that I was doubting it after the first time, but you wonder if maybe some obscure option you hit in the GUI might have triggered it behind the scenes or something. This time, I was certain.</p><p>I was also thankful I blogged the instructions for fixing it the first time, as it saved me a lot of research this go-round. Not that I didn't research it; I just did so after fixing it.</p><p>Thankfully, I didn't come up empty-handed. I found <b><a href="https://bugs.launchpad.net/ubuntu/+source/util-linux/+bug/90526">Bug #90526:</a> Routine fsck deactivates swap, changing UUID</b>. Yep, that's it exactly, officer. The clue this time was that I saw the fsck happen as my laptop booted, and realized that the last time seemed to be in close temporal proximity to a fsck as well. It's a confirmed bug, with no indication of a time frame for a fix. However, it's always nice to know that you aren't crazy, and that your issues have already been documented.Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-90918621285220137492007-05-25T03:35:00.000-05:002007-09-21T04:31:19.487-05:00Hibernate once again!<p>So, the <a href="https://launchpad.net/ubuntu/+bug/66637/comments/23">instructions</a> I mentioned in my <a href="http://madasi.blogspot.com/2007/05/ubuntu-704-dell-e1505.html">last post</a> seem to have cured my swap & hibernation issues on my laptop for the time being.</p><p>The short version was:<br/><br/><code><pre>1, determine your swap with 'fdisk -l'
2, do mkswap on your swap partition - RECORD THE UUID WHICH THIS COMMAND OUTPUTS
3, now use this UUID to put into fstab and resume files...RESUME=UUID=<the-swap-partition-uuid-from-vol_ID>
should go in /etc/initramfs-tools/conf.d/resume
4, update-initramfs -u
5, reboot normally after this finishes</pre></code><br/><br/>What caused my problem in the first place is still very much a mystery though.</p><p>Reading through the threads, it appears that the general idea is that either the hibernate or the suspend failed for an unknown reason. This left the hibernation image stored in the swap space, which can apparently cause the swap space to be considered corrupted, and therefore not be mounted.<br/>However, most of the comments indicate that in this scenario the UUID issues, which are what the above steps fix, only appear once the user has manually run mkswap, which changes the UUID of the swap partition. If, as Ubuntu defaults to doing, you are mounting partitions by UUID, and not just label, then once the UUID changes, mounting cannot succeed until the fstab is updated manually.</p><p>However, in my case, I was already getting UUID errors when running <code>swapon -va</code>, which was why I began tracking down the problem in the first place. I certainly didn't run mkswap before the error appeared, so why did either a) the UUID change, or b) the symlink for the UUID disappear? That is half the mystery, and what caused the hibernate to begin failing is the other half.</p><p>However, for now it is functioning fine, so I probably won't be spending much time digging further into it, unless I have more problems with it in the future.</p>Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-79274180594866499992007-05-24T02:49:00.000-05:002007-05-24T03:47:52.724-05:00Ubuntu 7.04 &amp; a Dell E1505<p>Ok, so I've had my laptop for almost 6 months now, and had Ubuntu running happily on it for most of those. I'm intending to put together a detailed article with the hardware specs, and all the little tricks I had to follow to get things working properly, but haven't made a lot of progress. However, I still intend to, since I haven't found them collected together in one place anywhere else.</p><p>So, I started with Ubuntu 6.10 (actually 6.04, but I wiped that and installed 6.10 within a week), and upgraded to 7.04 the first day it was officially out. The upgrade improved several things, and made several others worse. I've also got a few annoyances and the like hanging around from 6.10 that probably wouldn't be present if I did a fresh install, but I've got the system so heavily tweaked that I'm highly resistant to the idea.</p><p>I'll go into more detail later, but for this post I wanted to highlight an issue that showed up yesterday and that I just got fixed, hopefully. Hibernating & suspending have mostly worked out of the box for me, without any tweaking in 6.10. I'd occasionally have issues where it would refuse to hibernate, with the log just showing that powermanager decided to resume after one of the processors (it's a dual-core chip) had shut down, but before it had written the RAM to disk. Depending on the exact timing, this either led to it coming back up and just not suspending, or it would lock up, never finishing suspending or resuming. Other than this, and occasionally resuming with the LCD still off, which I fix by suspending-to-ram, then resuming, it hadn't given me trouble. It worked well enough that I normally hibernate, and only actually shut down or reboot a couple times a month. 
However, it wasn't quite reliable enough that I'm comfortable just shutting the lid and relying on it to suspend automatically.</p><p>The upgrade to 7.04 seemed to have increased the reliability, if anything, with one odd side-effect. Almost every time I'd resume, it would play its little siren sound, and display a popup notification telling me that suspend had failed. Of course, considering I only get the message after a successful resume, I found the message odd, but haven't bothered to investigate it at all.</p><p>So, yesterday, the only change I made was to install some updates, which were for vim-tiny, vim-common, and samba. These shouldn't have had any effect at all on suspending. So, I finish up at work, hibernate, and go home. At home, I start it up to check something, and am surprised to find myself at a login screen. I log in, and check the syslog for clues. I don't see the normal saving RAM to disk messages, or the normal resume messages, and in their place is a message I don't remember seeing before, which comes near the end of logging before the startup happens.
<code>PM: suspend-to-disk mode set to 'shutdown'</code>
Unsure if this is relevant or not, I finish what I am doing, and hibernate again. Sure enough, once I get to work the next night, I'm greeted upon bootup with a login screen. I google for this string, and find some of the source code, and an old message or two about failed hibernations screwing with their swap partitions, but nothing catches my eye.</p><p>A while later, I notice that the computer has basically frozen. Now, it didn't catch my attention at first, as launching Firefox puts my computer out of commission for about 5-10 minutes usually, as it loads the 200-something tabs I had open last time I shut it down. (No, that's not an exaggeration; there are currently 217 tabs spread across 3 windows. There used to be about 3 times that. You should see my del.icio.us list.) However, this time it is much longer lasting, and I notice that according to the system monitor, which is only updating sporadically, there isn't much network activity, and all the CPU time is I/O wait. This isn't typical behavior. I finally switch over to one of the text consoles, log in, and check top, since I can't get any response in X. Looking around, I suddenly notice something: there is about 12MB of RAM free, and 0KB of swap space. Oh, that's because there isn't any swap on the system. WTF? No swap? This system has 1GB of RAM, and 2GB of swap. Why do I have no swap? I try to log in to a couple more virtual consoles to investigate and get my swap activated, but by this time the system is so sluggish that the logins time out before ever actually prompting me for a password. I hard reboot, and start googling from a work computer in the meantime.</p><p>The problem I run into eventually is that my swap is still listed in /etc/fstab, yet if I run swapon -va it says it cannot find that UUID. I find some interesting information in <a href="http://ubuntuforums.org/showthread.php?t=287962">this post</a>; however, their commands don't work for me. I then find a link to <a href="https://launchpad.net/ubuntu/+bug/66637/comments/23">a comment</a> for bug #66637 from 10/31/2006, which does the trick. Just remember all these commands need to be run as root, or prefixed with sudo. After a reboot, my swap is loaded properly, and the system successfully resumes from a test hibernation once again.Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-44085516660300569792007-03-29T07:34:00.000-05:002007-03-29T09:16:07.079-05:00Mafiaa<p>Ok, so news like <a href="http://techdirt.com/articles/20070328/150008.shtml">this</a> makes me grin. While I understand that downloading copyrighted music is a <b>civil offense</b>, I really think suing your customers is a stupid business strategy. Although, I also think adding "Don't even think about pirating this you evil person you!" messages to CDs, and liner notes, and guilt-trip clips before movies is just as insulting to the consumer, especially since you only see these after you've handed over your money, placing you firmly in the classification of paying customer.</p><p>However, even once we get over these issues, the way the RIAA primarily, and to a lesser extent the MPAA, are going about these lawsuits is ridiculous, and in many cases fraudulent. They threaten people with lawsuits to get upfront settlements, file anonymous lawsuits by the hundreds, and have even tried to scold a college for not keeping IP address assignment logs because "Don't they know they are important?". 
Well, they may be important to the RIAA, but to the college they hold no special value, and consume valuable & expensive resources to keep around, so the logical choice is not to keep them unless they have a specific need for them.</p><p>I'm all in favor of people fighting back on these lawsuits, whenever they have the means to do so. Progress is finally being made too. With several cases having forced the RIAA to pay the defendant's legal fees, this may embolden lawyers to offer their services in exchange for any fees they can recover if they win, which would help more people get qualified representation. In <a href="http://recordjackethistorian.blogspot.com/2006/05/how-much-is-single-track-worth.html">another case</a>, the defendant, who was ruled against on the main issue, has successfully gotten the court to question the ridiculous per song damage figures spouted by the RIAA.</p><p>Lastly, if you haven't already heard the joke, the Recording Industry Association of America (RIAA) and the Motion Picture Association of America (MPAA) should join together to form the Music and Film Industry Association of America (MAFIAA).</p>Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-34895433959008501742007-01-25T20:21:00.000-06:002012-08-30T12:06:10.282-05:00OTM ExtensionsThere will of course be lesser used variations of the OTM interval.<br />
<br />
The MTO interval is for when a device is un-impressive at first, but you later discover something about it that really changes your opinion of it. This will be a very subjective measurement, and therefore much more difficult to quantify.<br />
<br />
There will also be an MTOTM interval, measuring the time from getting an un-impressive device, discovering something cool about it, and then deciding that the device is still pretty lame after all.Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com1tag:blogger.com,1999:blog-9896068.post-4579056182467604992007-01-25T15:09:00.000-06:002007-01-25T16:05:27.536-06:00This Post has an OTM rating of 2 days.So, in conversation today, we created the OTM rating scale for new technology. This is how long from the time of purchase it takes for your opinion of the item to go from Ooh! to Meh! This should be a required statistic listed for all new devices.
This was discovered while playing with a POS at work. (That's Point-of-Sale, keep it clean!) I discovered an unlocking lever on the monitor, and said Ooh! I then played with it, only to find that it was already at the best position; the others were horrible, prompting a Meh! We then observed that it took about two seconds to go from Ooh to Meh, and the OTM rating was born!Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-3788854956424009292006-08-29T10:31:00.000-05:002006-09-12T23:43:20.585-05:00Christmas Decorations<p>Ok, I realize that the whole "Christmas is over-commercialized" thing is itself commercialized at this point, and that nobody cares anymore, but I just want to note it down somewhere, because I say I'm going to every year.</p>
<p>Last night, August 28, was the first time I saw Christmas stuff for sale in a store. Granted it was Big Lots, but they had a decent amount of it out, near the generic fall and Halloween decorations. They also had the stocking instruction sheets that diagram how to lay it out on the shelves, so it wasn't just a case of having happened upon some extras they were trying to unload.</p>
<p>So, that's what, only 4 months of shopping time now?</p>Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-1133304985766095832006-07-01T21:42:00.000-05:002006-07-01T21:46:32.726-05:00EULA'S<p>I avoided bringing up the Sony/BMG rootkit fiasco back when it was big news.
The reasons were 1) it's a decently technical subject, and if you understand what I'm
talking about, you probably already know what's going on, and 2) many people more
technically proficient, knowledgeable, or eloquent than I have already said quite a bit about it.
I do want to mention the issues inherent in the horrible EULAs that accompany these CDs,
and other computer software.
I dislike these one-sided agreements where terms get dictated, and you have only a binary
yes/no choice, usually after you've already paid for the item.
Software isn't the only place these are found though, not by far. Admission tickets to
almost anywhere are pretty bad, as are parking garage claim tickets, etc.
Here is the exact text from the back of a ticket from our recent trip to the
zoo.</p><p>"This ticket is issued to Holder as a revocable license which may be revoked
at management's discretion for any reason including Holder's acting in a
disorderly manner or otherwise violating the rules or regulations of the
Zoo. The Zoo shall not be required to issue an exchange or refund for any
reason including inclement weather. Holder voluntarily assumes all risk and
danger of personal injury and all hazards, which are related in any way to
Holder's visit. The zoo and its officers, directors, employees and agents
are neither responsible nor liable for any injuries, expenses, claims, or
liabilities resulting from or related to Holder's visit and Holder expressly
releases each of those persons from any claims arising there from. Holder
grants permission to the Zoo and its designees to utilize Holder's image,
likeness, actions or statements in connection with any live or recorded
video, photographic display or other transmission or reproduction without
payment, inspection, or review by Holder. Holder agrees not to transmit,
distribute or sell (or aid in transmitting, distributing or selling) any
description, account, picture, video, audio or other form of reproduction of
the visit for which this ticket is issued. Pets are not allowed inside the
Zoo."</p><p>So, this license says that no matter what, they don't have to give me my money back. No matter what happens I've agreed not to hold them responsible. They can take as many photos/videos of me as they want and use them any way they want, and I don't get to see them first, or even be notified of it. Finally, I'm not allowed to even show my vacation photos, especially not on the Internet. I can't tell you what I saw, or what they have at the zoo, I can't give you my review of it, etc.</p> <p>Typical all for me, none for you treatment. Do
celebrities get special tickets? That can't afford (and don't allow) their
likeness to be given away this freely. What if I don't agree to these terms? Am I out my money? These are the kind of things a lawyer somewhere thought up, and it gets shoved down your throat, and most people probably never even read the back of the ticket.</p><p>It's these kind of invisible, legally binding contracts we enter into so many times per day, usually without ever knowing it that bug me. America is already lawsuit crazy enough that the zoo feels the need (or the zoo's lawyers feel the need) to limit their liability, and while we are at it, let's throw some other things into the contract. Now if we get a bad review, we have legal grounds to sue. Not that we would, just in case, you know, we needed to.</p>Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-1146759868293679862006-05-04T10:57:00.000-05:002006-05-04T11:29:23.076-05:00Smart Fasteners, Normal People<p>Ok, so I found <a href="http://www.chicagotribune.com/business/chi-0603300225mar30,1,7805363.story?coll=chi-business-hed&ctrack=1&cset=true">the story</a> on smart fasteners, as previously promised. Reading it requires free, mandatory registration, so use <a href="http://bugmenot.com/view/www.chicagotribune.com">Bug Me Not.</a> For the most part, the technology sounds interesting, although I'm not thrilled with the idea that my neighbor will be able to disassemble my car without touching it, but hey, that's the price of progress!</p>
<p>Actually, the thing I really dislike is a sub-current running beneath the article, but never actually stated. Not only might this type of technology prevent thieves from removing your airbag, but it might also prevent you from doing any maintenance on your own vehicle. Or your neighborhood mechanic. After all, these unlocking codes will be pretty valuable, so maybe we should only let the auto dealerships have them. They should have been the ones servicing your car all along anyway, right?</p>
<p>Ok, now for my favorite quotes from the article, starting with the worst statement of all.</p>
<blockquote><p>A potential security breach threat apparently doesn't exist.
"I wondered what's to prevent some nut using a garage door opener from pushing the right buttons to make your airplane fall apart," said Harrison. "But everything is locked down with codes, and the radio signals are scrambled, so this is fully secured against hackers."</p></blockquote>
Now, first, this statement appears to have been made by "Kirby Harrison, a senior editor at Aviation International News, who attended the debut of intelligent fasteners at a trade show in Hamburg, Germany, last year", and not the inventor. However, that doesn't make the statement any less laughable. <a href="http://en.wikipedia.org/wiki/Wired_Equivalent_Privacy">WEP</a> was locked down with codes and scrambled radio signals too, and it is considered next to useless nowadays. Different situations entirely, but the point stands. As crypto experts are fond of saying, anyone can invent a code that they themselves cannot crack.</p>
<blockquote><p> The mechanism that holds auto airbags in place is a natural for intelligent fasteners, said Steve Brown, product development director at Textron.
Installing airbags with conventional screws is tedious and expensive, and it doesn't provide security. An estimated 50,000 airbags are stolen each year for resale, he said.
Intelligent fasteners only respond to radio signals that use appropriate codes. This would prevent removal of airbags by unauthorized people, Brown said.</p></blockquote>
<p>Ok, as if the first statement wasn't sufficient cause for a cracker/hacker somewhere to decide that the system would be broken (and trust me, a direct challenge like that is more than sufficient), this provides us with a financial incentive. Once the system is broken, stealing airbags gets a whole lot easier. Instead of breaking in with tools, and risking leaving fingerprints and the like everywhere, walk up with your laptop, and watch the airbag disconnect from the car so you can grab it and take off, no other tools needed. Or, just steal the whole car (perhaps using <a href="http://madasi.blogspot.com/2006/05/gone-in-20-minutes-using-laptops-to.html">this method</a>), then disassemble the whole thing easily & at your leisure.Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-1146687520754995222006-05-03T15:18:00.000-05:002006-05-03T15:18:40.790-05:00Gone in 20 Minutes: using laptops to steal cars<p>From <a href="http://digg.com/">Digg</a></p>
<blockquote><p>A look at how thieves are using laptops to steal the most expensive luxury cars. Many of these cars have completely keyless ignitions and door locks, meaning it can all be done wirelessly. Thieves often follow a car until it gets left in a quiet area, and they can steal it in about 20 minutes. Scary stuff.</p></blockquote>
<p>You'd think someone, somewhere would have learned by now that software can and will be broken, especially when it is protecting something of value. There was a report a while back on "smart fasteners", basically bolts & screws that can be unlocked by computer. The uses mentioned sounded interesting, but the article had the same "it can't be broken because we know what we are doing" tone that is just evidence of a loss of touch with reality. I'll look for the link to post later.<br/><br/><a href="http://www.leftlanenews.com/2006/05/03/gone-in-20-minutes-using-laptops-to-steal-cars/">read more</a> | <a href="http://digg.com/technology/Gone_in_20_Minutes:_using_laptops_to_steal_cars">digg story</a>Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0tag:blogger.com,1999:blog-9896068.post-1139526788551282342006-03-08T23:33:00.000-06:002006-05-03T12:01:27.816-05:00Desktop Defaults<p>Ran across this article on /. a while back:
<a href="http://www.eweek.com/article2/0,1895,1923402,00.asp">http://www.eweek.com/article2/0,1895,1923402,00.asp</a></p>
<p>The author makes the somewhat controversial statement that what ends up on users' desktops isn't what's best, but what was there to begin with. I tend to agree, based on my experiences with "end users".</p>
<p>The average end user still doesn't understand exactly what a browser is, or why they would want to try a different one. To many home users especially, Internet Explorer is the internet. They do not mentally separate the program from the activity. (The tool from the task.) This is normal for first time PC users. You want to do task XYZ, well you click here on program ABC. Mentally ABC == XYZ. It's only over time and with experience that they may learn they can do the same task XYZ with program DEF as well! For some people, this is like the dawning of the day, and they begin exploring what alternate programs they can use for the other tasks they do on a regular basis. They soon learn that different programs do the same task better, or just differently, have different features, etc., and find the programs that best fit how they want to perform their tasks.</p>
<p>For others, this situation presents overwhelming choices, and is a very bad thing. These are the people who get frustrated with the computer when they have to choose what program to use. It's like ordering coffee at Starbucks. (Or any food from a replicator on Star Trek.) Sometimes you just want coffee, without having to add 15 modifiers to identify a specific drink. They want the computer to just do certain tasks, and do them well, without asking technical questions about how to do them. Layers of obscurity into how things work are not always a bad thing, at least, not if they can be gotten around easily if the user so desires. After all, this same principle lets you drive a car without understanding how a fuel-injection system works. Do you really care what brand spark plugs are in your car, or know if they are gapped properly? Some people do, most don't. If you had to know to drive a car, would you be finding another way to get to work tomorrow?</p>
<p>Usually, single task devices seem to come first, and are extended into multi-function devices, like a cell phone that can check my e-mail, browse the web, play music and movies, and let me instant message people, plus, oh yeah, call someone. However, the argument can easily be made that this is often at the expense of whatever the primary function of the device was originally, and almost always at a price of a steeper learning curve and more complex user interfaces.</p>
<p>However, some people just want a device that does one thing, and does it very well. Do you want your checkbook to play music and have net access built in? Do you really need a TV built in to your fridge? It is my opinion that at some point the idea of computers as specific machines we sit down at to do things will have to fade away, and be replaced with a type of distributed processing. The processing will all be done in the background out of sight, and we will simply perform our activities where they are most natural for us to do so. If I sit down at my desk at home, I can access my bank accounts and pay my bills. I can also do it from the couch or outside if I want to, but it will be what the system assumes I want to do when I sit down at my desk, based on my normal routine. The same system will display recipes for me in the kitchen, if I want, and will have my music follow me as I move around the house, but not into the kid's rooms when I peek in to pull their covers up after they are asleep.</p>
<p>If a system becomes this pervasive and integrated, most people won't want to know what recipe program they are running, or if their lawn maintenance software version is compatible with their new robotic lawn mower they just bought. Yes, some people will know, care, and love every minute detail. They will have custom interfaces for everything, and their houses will literally respond to their every whim. I will probably be one of these people. However, this will be the exception, because everyone else will just want it to work, and won't care if they are using Blinds 3.0 from Windows Corporation, OpenMyBlinds 4.27 from AOL/Time Warner, or GBlinds 2.7 (Beta) from Google World Domination, Inc.</p>Madasihttp://www.blogger.com/profile/05623570499803249858noreply@blogger.com0