In the circles I run in, you can’t go 15 minutes into a conversation without someone bringing up Network Neutrality. In my world, it’s a huge deal, and there are people on both sides of the issue. If you don’t know what Network Neutrality is, just pop open your favorite search engine and type in ‘network neutrality’. You’ll find tons of articles and videos on the subject, most of them asking you to write your elected official or sign a digital petition. The problem is, for the most part, no one really cares. Sure, there’s a small group of technology people who consider Network Neutrality one of the most pressing issues of our age, but regular folks aren’t getting the message. With so much information available from both sides of the issue, you would think people would be all over it.
The Heartbleed bug was caused by a programming error in a software package called OpenSSL. This error made it possible for bad people to connect to secure web and email servers, as well as other services that rely on the TLS/SSL protocol, and steal the private encryption key from those servers. The TLS/SSL protocol is what puts the pretty little lock in the address bar of your browser. The private key is what the owners of the sites you visit are supposed to keep secret and not share with anyone, because if someone has it, they can decrypt the encrypted data traveling between your system and the server. THIS IS BAD…
Heartbleed – What is it? (for geeks)
The Heartbleed bug was caused by a programming error in the OpenSSL library code that deals with TLS handshakes. A couple of years back, a new RFC (RFC 6520) proposed an extension to the TLS protocol that would allow a heartbeat to be exchanged between the client and server to reduce the number of re-negotiations during a TLS session. This all sounds good, and is actually quite beneficial to the protocol in general, but when it was implemented in OpenSSL, an error in the code allowed a heartbeat request to be answered without checking the boundaries of the request itself. A request crafted in a certain way could cause OpenSSL to return up to 64k of protected memory, possibly containing the server’s SSL private key.
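To make that concrete, here is a rough sketch in C of the kind of bounds-check failure involved. This is not the actual OpenSSL source; the function names and message layout are simplified for illustration, but the core mistake (trusting a length field supplied by the peer) is the same.

```c
/* Illustrative sketch of a Heartbleed-style bounds check failure.
 * Not the real OpenSSL code; names and layout are simplified.
 *
 * Conceptual wire format of a heartbeat message:
 *   byte 0     : message type (request/response)
 *   bytes 1..2 : payload length CLAIMED by the sender (big endian)
 *   bytes 3..  : payload, then padding                              */
#include <stdlib.h>
#include <string.h>

/* Vulnerable pattern: trust the claimed length without comparing it
 * to the number of bytes actually received. */
unsigned char *build_response_vulnerable(const unsigned char *record,
                                         size_t record_len)
{
    (void)record_len;                       /* never consulted: the bug */
    size_t claimed = ((size_t)record[1] << 8) | record[2];

    unsigned char *resp = malloc(claimed);
    if (resp == NULL)
        return NULL;
    /* If 'claimed' says 64KB but only a few bytes were really sent,
     * this copy reads past the end of the record into adjacent heap
     * memory, which may hold key material, passwords, and so on. */
    memcpy(resp, record + 3, claimed);
    return resp;
}

/* Fixed pattern: refuse any request whose claimed length does not fit
 * inside the record that was actually received. */
unsigned char *build_response_fixed(const unsigned char *record,
                                    size_t record_len)
{
    if (record_len < 3)
        return NULL;                        /* drop malformed requests */
    size_t claimed = ((size_t)record[1] << 8) | record[2];
    if (claimed > record_len - 3)
        return NULL;                        /* claimed more than was sent */

    unsigned char *resp = malloc(claimed);
    if (resp == NULL)
        return NULL;
    memcpy(resp, record + 3, claimed);      /* now provably in bounds */
    return resp;
}
```

The fix that went into OpenSSL boils down to the same idea as the second function: check the claimed payload length against the record actually received, and silently drop anything that doesn’t fit.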
Samba 4 as an Active Directory Server – Can it dance the dance?
Two weeks ago I thought to myself, ‘Gee, now that Samba 4 has a real release out, wouldn’t it be fun to test it out and see how it holds up?’ And so my adventure began. Now mind you, I’m no novice with Samba or Active Directory, so I figured this would be a simple setup and test. How hard could it be?
Seems like most of my job nowadays is looking at large systems and isolating problem areas: things like performance problems, data corruption, or even failure analysis. Many of these systems have several independently managed processes, all tied together in a single forward-facing application. Over the years, I’ve developed some methods of approaching system failures and problems that give me a better chance of quickly evaluating and repairing the issues that plague these systems. I used to believe that these methods were only valid on larger systems. Then, one day, a colleague and I were sitting in a small coffee house discussing a problem they were having with one of the desktops they manage. While we exchanged ideas, I suddenly realized that I was using the same mental process on this little desktop as I do on large cluster systems.
The End of IPv4… The Adoption of IPv6… “The King is Dead! Long Live the King!”
At a ceremony today, February 3, 2011, the last five /8s were delegated to the Regional Internet Registries (RIRs). For most people this has little meaning, but to those of us who make our living from the IPv4 protocol, and who have spent countless years learning the tricks of the trade, it marks the end of an era.
As for me, I’m ready for the ‘big switch’ to IPv6. But I know many friends and colleagues who have procrastinated, claiming this day would never come, or who are waiting for a vendor to swoop in and save the day. Well, to those I say, WAKE UP! The companies you work for, and the customers you service, will be greatly affected by the IPv4 shortage and the logical adoption of IPv6. The day is at hand, and vendors stand to make their money simply by selling upgrades to their equipment to handle IPv6, so I don’t think a magic bullet is in the cards. As of now, the best solution for your company to look at is dual stack. In as little as a year, you could have customers who are unable to reach your web-based services, or who can only connect to them at modem speeds due to overloaded proxies. I strongly suggest you start working on this now, especially if you have outward-facing services such as a web server or email server.
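For anyone wondering what dual stack actually means on the server side, here is a minimal sketch of my own (Linux/POSIX, port 8080 is just a placeholder, not any particular service of ours): a single listening socket that answers both IPv4 and IPv6 clients.

```c
/* Minimal sketch of a dual-stack TCP listener (Linux/POSIX).
 * Assumes the host already has both an IPv4 and an IPv6 address.
 * One AF_INET6 socket with IPV6_V6ONLY disabled also accepts IPv4
 * clients as v4-mapped addresses (::ffff:a.b.c.d). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET6, SOCK_STREAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    int off = 0;
    /* 0 = also accept IPv4 connections on this v6 socket. */
    setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof(off));

    struct sockaddr_in6 addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin6_family = AF_INET6;
    addr.sin6_addr   = in6addr_any;   /* every v6 (and v4) address on the box */
    addr.sin6_port   = htons(8080);   /* placeholder port for this example */

    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(s, 16) < 0) {
        perror("bind/listen");
        return 1;
    }

    printf("listening on [::]:8080 for both IPv4 and IPv6 clients\n");
    /* accept() loop omitted; the point is a single dual-stack listener. */
    close(s);
    return 0;
}
```

The application side really is that small. The bigger job, and the part I spend my time on, is getting native IPv6 addressing and routing to the box in the first place.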
That’s it for now; I’m busy preparing for the Southern California Linux Expo. This year it will be held at the Hilton LAX on February 25-27, 2011. I look forward to seeing you all there!
Back in December ’09, my company, ACT USA, began testing IPv6. Those tests quickly advanced to our production environment, and over the last six months I have been setting up native IPv6 connectivity for all of our data centers, based on the dual-stack model. This article covers the technology available and the choices I made based on it.
I was recently asked to put together a brief web presentation on the different methods of creating redundant networks. I couldn’t think of a better place to put it than right here on my blog. After all, I was overdue for a post anyway…
What do I mean by redundant networks?
A redundant network is two or more distinct paths for data to travel to and from an upstream network. In its simplest form, it can be a spare piece of equipment that can be manually placed into service after a failure. More often, though, it is set up so that any single device or connection can fail and, without user intervention, a backup system or connection will automatically step in and take over the job of the failed one. A redundant network does not mean that no matter what happens, your data will still be reachable. There are many factors, ranging from your providers to your applications, that can still cause a failure.
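To show the idea in miniature, here is a toy C sketch of the detect-and-switch logic at the application layer: try the primary path, and if it fails, automatically fall back to the backup. The host names are hypothetical, and real network redundancy is usually handled lower in the stack (protocols such as VRRP, bonded links, or multiple uplinks), but the pattern of detecting a failure and switching over without user intervention is the same.

```c
/* Toy illustration of automatic failover: try a primary upstream,
 * fall back to a backup if the primary is unreachable.
 * The host names below are hypothetical placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

/* Resolve and connect to host:port; return a socket fd, or -1 on failure. */
static int try_connect(const char *host, const char *port)
{
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;      /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    int fd = -1;
    for (p = res; p != NULL; p = p->ai_next) {
        fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, p->ai_addr, p->ai_addrlen) == 0)
            break;                      /* connected */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}

int main(void)
{
    /* Hypothetical primary and backup endpoints for the same service. */
    const char *paths[2][2] = {
        { "primary.example.net", "443" },
        { "backup.example.net",  "443" },
    };

    for (int i = 0; i < 2; i++) {
        int fd = try_connect(paths[i][0], paths[i][1]);
        if (fd >= 0) {
            printf("using %s\n", paths[i][0]);
            close(fd);
            return 0;
        }
        fprintf(stderr, "%s unreachable, failing over...\n", paths[i][0]);
    }
    fprintf(stderr, "no path available\n");
    return 1;
}
```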