A Physical Keyboard? You're Insane

    This morning I woke up to find some articles posted online (here and here) more or less asking Apple to add some functionality to the iPhone. The first article was written by someone looking to ditch their Nokia N95 for the iPhone, the snag being that he requires it have a better camera. While that is technically his only request, he went out of his way to include a mockup of the phone, which also includes 60GB of storage, a slide-out keyboard, a front-facing camera for video chatting, and a camera with optical zoom. The folks over at Gizmodo took the idea and tweaked it some to end up with this.
    As I read over these things I can't help but wonder: have they not completely missed the boat here? The decision by Apple to not include a physical keyboard, but only a virtual one, was without a doubt a huge, nay epic, decision. While the folks over at RIM were scratching their heads over how they could possibly make QWERTY keys smaller, even resorting to putting two letters on a single key, Apple saw the issue and sidestepped it entirely. Of course, this decision was very much a gamble. Are people ready for this? Are they willing to give up a physical keyboard? The answer is yes. While I can hear the hardcore Blackberry users yelling, I believe this is a case where they need to be ignored (end users don't like to hear this, but there are occasions when it's true).
    Want proof? Take a gander at the Blackberry Storm and the Android G1.
    Like the iPhone, the Storm chose to skip the physical keyboard and in its place put a virtual one with "haptic feedback" (the much-discussed 'clicking'). I had the chance to play with a Storm briefly this past week and found it to be a nice phone, and while it did take me some time to get used to the keyboard, I could see myself becoming quite good with it after a few days of use. Oddly enough, one of the common questions/complaints with the Storm is whether or not the 'click' can be turned off so the keyboard can just be used as a touchscreen (the answer is no). While we're on this tour de phone, it makes sense to discuss the Android G1, which did choose to include a physical keyboard in its design. What feature are the Android developers feverishly working on? That's right, a virtual keyboard. While I have not heard many complaints about the G1's keyboard (granted, I don't know anyone with one), people still want a virtual keyboard for the convenience of being able to type away immediately without having to flip up a screen.
    The transition to a virtual keyboard can no doubt be an unpleasant one, but realistically a keyboard the size of a business card (virtual or not) is going to take some getting used to. And before you start pondering how you are going to text while driving on a virtual keyboard: you won't have that problem for long, since it becomes illegal in California in two days. Problem solved.
    After having used the iPhone keyboard for a while now, I can say that the only real issue with it is the lack of landscape typing in crucial applications like Mail and SMS. While I do fine with the vertical keyboard, I think the addition of sideways typing could really make it easier to type away on a long email. I am in no way the only person who feels this way, and I know Apple has received many a request for this.
    Getting back to the original point of the article, the decision to omit a physical keyboard was no doubt a large one, but it is not one I see being reversed. Virtual keyboards are going to continue to improve to the point that we will look back and laugh at the Blackberries with rice-grain-sized keys. So those of you holding out for an iPhone with a full QWERTY keyboard: don't hold your breath, or better yet, keep dreaming.

Just a few quick notes on the other features proposed for the phone:
60GB of space - Sure, who doesn't want more space? That's just a matter of price; SSDs aren't getting cheaper as fast as anyone would like.
Front-facing camera - Word is the G2 is going to have this; we'll see. The kind of video you could see (and transmit) in real time over a 3G network seems pretty awful. While there are physical limitations which constrain this, I think the network is another big consideration.
Optical-zoom camera - No doubt the iPhone camera is beyond pathetic. The lack of even primitive video capture is quite perplexing considering that every cheapish cell phone today can do it. I think full optical zoom is a bit much. Realistically I see improvements to the camera in the future, but it's still going to be second-class in the camera department, as are all cellphone cameras.

Blackout

So I finally had some time and decided to update the styling on my blog. The old one was nice, but I definitely prefer this one much more. It's somewhat different from what I originally had in mind, but I think it's for the better.

Bittorrent using UDP

BitTorrent Inc., the creators of the very popular uTorrent client, have stated that in the next version (currently alpha) they are going to make a proprietary UDP-based protocol called uTP the default protocol used by uTorrent (see announcement). This prompted a fellow over at The Register to predict that the move will cause the internet to melt down. Of course such a drastic prediction has brought about all sorts of discussion of the issue; see the Slashdot posts.

The first thing to understand about BitTorrent and TCP (which it currently uses by default) is that the two are a poor fit. If you think about the way BitTorrent works, it generally creates lots of connections for short-lived, high-throughput transfers. Unlike previous P2P applications (e.g. Kazaa/Napster), files on BitTorrent are split into small chunks; this way the chunks can be downloaded from different hosts simultaneously, and likely significantly faster. Chunks themselves are generally quite small (1MB by default), and most people tend to have high-speed (broadband) connections, so the duration of the connection to any individual host is short. These factors make TCP a terrible fit for BitTorrent. Basically, when TCP starts a connection it begins sending very slowly (1 in-flight packet) and builds up (exponentially) until a packet is lost, which signals the sender that it may be going too fast. While the exponential increase should let the sender reach its maximum send rate quickly, it is not quick enough. With the high-speed broadband connections available today, it actually takes a reasonable number of round trips for TCP to reach its maximum send rate. Based on some back-of-the-napkin calculations, a 1 MB/s (8 Mbps) link requires roughly 90 in-flight packets (assuming a 1500-byte packet size and 100ms RTT, Throughput = 0.75 * (window size / RTT)). Starting at 1 packet, it will take slow start 7 round trips (2^7 = 128) to get up to that rate, and 1 MB/s isn't even that fast... Since the connections made are generally short-lived, they will likely spend most of their time in slow start, sending much more slowly than they actually could.
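To make the back-of-the-napkin math concrete, here is a small sketch using the same assumptions as above (1500-byte packets, 100ms RTT, the 0.75 factor) that computes the window needed to sustain a given link rate and how many slow-start round trips it takes to get there:

```python
import math

def window_packets(throughput_bps, rtt=0.1, pkt_bytes=1500, factor=0.75):
    # Throughput = factor * (window / RTT)  =>  window = throughput * RTT / factor
    window_bits = throughput_bps * rtt / factor
    return math.ceil(window_bits / (pkt_bytes * 8))

def slow_start_rtts(target_packets):
    # Slow start doubles the window each RTT starting from 1 packet,
    # so we need the smallest n with 2**n >= target_packets
    return math.ceil(math.log2(target_packets))

pkts = window_packets(8_000_000)    # a 1 MB/s (8 Mbps) link
print(pkts, slow_start_rtts(pkts))  # 89 packets, 7 round trips
```

Run it for a faster link (say, 100 Mbps) and the window balloons past a thousand packets while the round-trip count only creeps up, which is exactly why short transfers never get out of slow start.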

With all of these factors it comes as no surprise that the folks over at BitTorrent want to make their own protocol which they can optimize for torrenting, especially since torrent traffic is completely different from the types of traffic TCP was designed for and performs well with. The protocol and the client itself are proprietary, which is going to be a point of contention with quite a few people, I am sure.

The reason some have predicted the meltdown of the internet is that UDP does not employ any congestion-control or congestion-avoidance schemes like TCP does; in its basic form it just blasts away packets as fast as it can. The nice thing about UDP is that you can add to it exactly the services you require and omit those you don't. TCP has so many things built in (reliability, in-order delivery, etc.) that hurt performance for this workload that BitTorrent simply chose to build their own on top of UDP, including only the things they need while not having to work around the things TCP forces on them.
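As an illustration of that "start from nothing, add only what you need" property, here is a sketch over plain UDP sockets (to be clear: this is not uTP's actual wire format, just a minimal example). Each datagram carries only a 4-byte sequence number plus payload; loss detection, ordering, and pacing would all be built by the application, only if and how it wants them:

```python
import socket, struct

# Receiver and sender on loopback
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(1.0)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Send three chunks, each tagged with a big-endian sequence number
for seq in range(3):
    tx.sendto(struct.pack("!I", seq) + b"chunk", rx.getsockname())

# Receive and unpack; nothing else is provided for us, and nothing
# else gets in our way
received = []
for _ in range(3):
    datagram, _ = rx.recvfrom(2048)
    received.append((struct.unpack("!I", datagram[:4])[0], datagram[4:]))
print(received)
```

Notice there is no handshake and no congestion control anywhere in that code; that is precisely the blank slate the uTP folks are starting from, and why what they layer on top of it matters so much.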

As for now, the UDP-based protocol will likely avoid many of the throttling mechanisms put into place by ISPs (e.g. Comcast) to help curb downloading. I don't believe this is an attempt to evade those mechanisms, though, since moving the traffic to UDP actually makes it much easier to distinguish torrent traffic from other traffic, whereas with TCP it is a somewhat more convoluted process. I suspect this will make it easier for ISPs to classify torrent traffic; the question is what they will do with it.

Update:
I just read this article over at networkperformancedaily.com where the author actually talks to the VP over at BitTorrent Inc. about why they chose to make a new protocol. Based on what he said, it does in fact seem like they are doing the responsible thing, even stating that the protocol needs to be "MORE sensitive to latency than TCP." He also says that it is a "scavenger protocol. It scavenges and uses bandwidth that is not being used by other applications at all." I think this is absolutely the correct approach to be taking. Just as a kicker, the first comment posted on that article was by Richard Bennett, the author of the Register's "The internet is going to melt down" piece.

How about that Facevertising?

    There has been quite a bit of publicity lately regarding Facebook's (relatively) low ad revenue and how much better MySpace is doing in terms of advertising. Well, how's this for advertising? I never thought I would see the day when a company would pay you (via gift card) to become a fan of theirs on Facebook. Initially I scoffed at the idea: paying people to become your fan on Facebook? Bizarre, to say the least. Why does Sears care about having fans on Facebook?
    The more I think about it, the more clever the idea is. Think about this: although Sears is a tried and true name in their business, their reputation with younger audiences is less...inviting. Yes, if my dad is looking to buy a new washer and dryer he'd probably look at Sears, but what do I need there? Sears is about to tell us (the younger audience).
    This advertising methodology relies on two main components. The first part is very similar to rebates: sure, it all sounds good on paper, but how many people will actually go through with it? Sears has eased the process by shipping you the card, but in order to give them your information you need to register on a website within 24 hours of becoming a fan. I'm quite sure that many of the people who do become fans will miss this little tidbit and end up gift-card-less.
    Now that Sears has you as a registered "fan" of their store, they have a direct communication channel with you. They are now primed to "update their fans" on all of the new and exciting offerings from Sears. Think about how many eyes will see these updates relative to advertising in the newspaper, and at what cost. Furthermore, updates on their site will be displayed on their fans' news feeds (you know, this one). That's a nice piece of real estate that Sears gets to inhabit when posting updates: it is the first page the 125 million Facebook users see when they log in to their accounts, which apparently over 50% do daily (oft quoted but apparently never cited, sorry).
    All in all, although the kneejerk reaction isn't a good one, I think Sears may be onto something. It is definitely an appropriate campaign for them, and I suspect you will start to see quite a few like it pop up quite soon, especially with the holiday season coming up.

Read a random line in a (large) file in Python

I've been working on a project which requires that I read random lines from a set of files. Unfortunately these files range between 200MB and 2GB, so reading an entire file into memory is incredibly slow, if possible at all. I looked around for a simple way to do this with large files and was unable to find a good one, so here you go:

#!/usr/bin/python

import os, random

filename = "averylargefile"
file = open(filename, 'r')

# Get the total file size
file_size = os.stat(filename).st_size

while True:
    # Seek to a position a random distance ahead in the file.
    # Mod by the file size so that it wraps around to the beginning.
    file.seek((file.tell() + random.randint(0, file_size - 1)) % file_size)

    # Discard the first readline() since it may start mid-line
    file.readline()
    # This returns the next (complete) line from the file
    line = file.readline()
    # If the seek landed inside the last line, wrap to the beginning
    if not line:
        file.seek(0)
        line = file.readline()

    # Here is your random line in the file
    print(line)


Using this method has shown to be MUCH faster than doing something along the lines of:
for line in file:
     #do something
and furthermore, that approach does not make it easy to pick out a random entry in the file.


Enjoy!

Update: Just a note on the "file.seek" line above: the "file.tell()" call is not strictly necessary, since the line you are currently on has no bearing on the next line you will be going to. Dropping it would also remove the need for the mod operator, since the randint would never be larger than the file size. Furthermore, since you are advancing the file pointer to a random byte offset in the file, if the lines are not all of equal length then you will not get true randomness: longer lines are more likely to be hit. Since, however, the algorithm takes the line after the random offset, this issue is lessened but not resolved. I'm looking for a better and more efficient solution in the meantime.
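For what it's worth, one standard way to get a truly uniform random line is reservoir sampling, where the nth line replaces the current pick with probability 1/n. A sketch (a different trade-off: it streams the whole file once instead of seeking):

```python
import random

def random_line(filename):
    choice = None
    with open(filename) as f:
        for n, line in enumerate(f, start=1):
            # keep line number n with probability 1/n; after the full pass,
            # every line has had an equal 1/N chance of surviving
            if random.randrange(n) == 0:
                choice = line
    return choice
```

This is O(file size) per call, so the seek trick above is still the right choice when the file is huge and a slight bias toward lines after long lines is acceptable.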

I (We?) still don't know how to read RSS feeds

      From the time I discovered what exactly that little orange square with the three lines in it was, I have been using RSS feeds to keep track of all of the sites I frequent. I think most readers agree that RSS feeds are in fact a far superior way of keeping up to date on all of the websites and news that we want to read. It replaced the old model of us going out and getting what we want with the new model of what I want coming to me. It's absolutely a huge step, and although you wouldn't know it by the numbers, RSS feeds are going to quickly become the norm.
With that in mind...
     One thing that is infinitely entertaining to study is the set of techniques that we all employ in dealing with the different parts of our daily lives. Things like email, which we all use and all have our own ways of managing. For example, when I check email I do my best to ensure that when I am done my inbox has 0 unread messages. Of course I don't read every email; I tend to do the "mark as read" bit quite often, and it seems as if many other people do the same given how prominently it is placed on just about every email site. I use labels/folders and filters to separate emails into different groups, nothing wild there. I often catch glimpses of other people's mailboxes and it is interesting to see how they manage theirs. I'm not going to go into details on all the interesting and bizarre things I've seen, but it seems to work for them (at least for the time being).
     So now we arrive at RSS feeds, which, similar to email, require some time to figure out how to manage. I have 22 RSS feeds which I subscribe to, and the post frequency between them varies quite widely. On one end I subscribe to the feeds of a few authors whose books I've enjoyed, who generally post a few times a week if that, and on the other end of the spectrum there are things like "deal" websites which have a few hundred posts a day. In the middle I have the gamut of tech sites (a la Slashdot, Gizmodo, etc.) which tend to have roughly 10-20 posts per day.
     Obviously the sites I subscribe to contain a wide range of things I want to keep track of, but at the same time the feed reader brings all of those things together, for better or worse. Before, if I wanted to get my news I would browse through a few sites, with how many depending on how much time or interest I had at that particular moment. In general I tend to look at things in groups, so I may start with regular news, then tech news, then a few blogs, etc. I would get all the information from one type of site I was interested in, then move on to another. With RSS feeds everything is in one location, so I can quickly see posts from all of the sites, and it tells me when there are new articles to read. Unfortunately that means I frequently find myself switching from Techbargains to xkcd a bit too quickly, and I don't properly "context switch" between them. As a result I often halfheartedly browse the posts in the feed when I really wasn't in the mood to read that particular information at that time; had I read it later I would likely have done a much more thorough job. Given that most people, including myself, employ the "only show me unread messages/posts" feature (if I already saw it, why see it again?), we will likely never see that post again.
     All this leads me to the fact that I simply don't know how to read RSS feeds. I am working on my own technique, but it simply takes time to develop. I thought that after already having one for email the rest would be easy, but that's not quite the case.
     I think RSS is one of those technologies that is going to be here for quite a while; it's really a viable way to transfer information (once we figure out how to use it). As a testament to how accepted it is becoming, open up your Facebook. What is the first thing you see? Yep, your "News Feed". Surprised?

An Array of Hope

The very elusive nerd & politics reference, not easy to do....

Thank you Google

For those of us who use Gmail like it's the antidote, I was happy to see this announcement: Google Labs now has a feature to add Calendar and Docs windows to the main Gmail screen. Brilliant! After reading one of their older posts about right-side labels and chat, I got the sense that some work was going to get done allowing users to better utilize the screen space that many of us look at so often each day. There tends to be quite a bit of hoopla about some of the more "interesting" features that Google releases (e.g. Mail Goggles), but it's features like these that really make people appreciate the service being provided.

I am also glad to see that Google is taking a hard stance on the cloud downtime complaints and fears. This is one of those fears that is continually brought up when you talk about cloud services. It will be interesting to see what sort of response this gets; I suspect plenty of people out there are going to be investigating.

Why is a cloud OS necessary?

      Ever since Ballmer announced that Microsoft is going to release a cloud operating system, I have seen the same question posted over and over again: “Why do we need another operating system for the cloud?” The answer is simple: you don't, Microsoft does. The “Azure” operating system was designed by Microsoft specifically for running in the cloud. The OS is not made for people accessing the cloud; it is made specifically for running on servers inside Microsoft datacenters. Will the OS be made available outside the datacenter? For all intents and purposes, no. If it is made available, it will likely be limited to developers and companies, specifically for creating and testing applications which will run in the cloud, and for installing on some corporate servers so they can have seamless integration between their in-house servers and the Microsoft cloud. For example, say a company runs a small server farm to host their web application. If they were to run that application on servers running Azure, then if the application got very popular and their servers became overloaded, they could send some of the excess traffic to the Azure platform as opposed to being completely Slashdotted into submission. Keeping the operating system consistent ensures a completely seamless transition between the two, as well as an actual test bed to run applications on before putting them on the platform.
      Seeing as how questions come in packs, the immediate follow-up is, “Isn't there already Windows Server 2008? What's wrong with that?” Yes, both operating systems are designed to run on servers, but by making an OS specific to the cloud there are quite a few optimizations which can be made to get even more performance out of the machines. In the case of Azure, it is definitely going to be run as a virtual machine inside the datacenter, much in the same way that Linux/Windows images are virtualized on EC2. Virtualization is really the key to making the cloud possible. It allows multiple OS instances to run on the same physical machine while encapsulating them from each other, so one VM can't mess with another's data or processes. Also, when someone is done with their VM, they simply replace it with a clean VM image and it's as if they were never there. On EC2 they run up to four VMs per machine; I highly suspect that by trimming down their server OS, Microsoft will be able to get a lot more VMs running reasonably on a single physical machine (I won't speculate as to the exact number, but I think it will be at least double digits, maybe more). Keep in mind that as opposed to EC2, where users have direct access to the OS, on the Azure platform users won't have direct access to the operating system. Therefore Microsoft can toy with the number of VMs to figure out the optimal number to run on their servers.
      So in summary, the new OS is not absolutely necessary; they probably could have done it on Server 2008. But given the scale they are shooting for, and the amount of optimization they can achieve by making a purpose-built OS, it is no real surprise they made a new version specifically for this function.

What Microsoft "Azure" really is...

      This morning I awoke to some very interesting news: straight from the get-go of PDC, Microsoft announced "Azure." I was very much interested in what exactly this service is, what it does, etc., so I started reading some of the news stories about it, and wow, people are confused. I partially blame Microsoft for clearly not giving people a good enough idea about what exactly Azure does and does not do. At the same time, I also blame journalists who don't seem to really know what's going on (yet don't seem to mind). So what is really going on? Let's talk about it.

      Today, at PDC, Microsoft announced Azure, its cloud computing platform. Just so it is clear, Azure is also the name of the operating system that the Azure platform will run on (previously dubbed Windows Cloud by Ballmer). I will address the "why does the cloud need another OS" question in the next post; for now let's just stick to the platform. For those of you unfamiliar with the cloud, it effectively breaks down into three different levels of services which can be provided (see this Tim O'Reilly article for a full definition of the three layers). Azure falls into the platform layer, which is the middle of the three-layer hierarchy. As the title suggests, this layer provides a “platform” which developers can utilize to write and run their applications. For example, let's say that I want to create a web application which allows users to input their contacts and access them from anywhere. If I were to do this using a framework like Rails (with Ruby) or Django (with Python), then after designing the application itself I would also have to set up a database which the application would use to store data like user login information and the actual contacts themselves. Using the platform layer, instead of running a database myself, they (the service provider) run the database for me, and I just access it through the provided interface. I still determine what tables I want and how I want them laid out; it's just that instead of actually writing the SELECT statement, I call a SELECT method. It is up to the provider to figure out how they should host the database and make it scale. Similarly, I as the user don't have to set up the platform and the underlying operating system; this is handled by the provider. Note how this is different from the infrastructure services (a la Amazon EC2).
Basically, I would write the contact application, making sure that the database interaction goes through their API, and once I'm done (with this release) I hand it over to the provider. It is up to them to figure out how to deploy it and scale it up and down as necessary. Again, very different from EC2, which requires the user to determine how to deploy it and manually scale the number of machines up and down (or use a third-party service like RightScale).
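To sketch the difference in developer experience (to be clear, the class below is a made-up stand-in, not Azure's or App Engine's actual API): on a platform service, the application calls a provided storage interface and never touches the database, its hosting, or its scaling.

```python
class Datastore:
    """Toy in-memory stand-in for a provider-hosted storage service."""
    def __init__(self):
        self._tables = {}

    def put(self, table, key, value):
        self._tables.setdefault(table, {})[key] = value

    def get(self, table, key):
        # how this is indexed, replicated, and scaled is the provider's problem
        return self._tables.get(table, {}).get(key)

# The contact application from the example, written against the platform:
store = Datastore()
store.put("contacts", "alice", {"email": "alice@example.com"})
print(store.get("contacts", "alice"))  # a get() call instead of a SELECT
```

Swap the stand-in for the provider's real client library and the application code stays the same; that substitutability is the platform layer's whole pitch.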
      Ok, now that you have a better understanding of what a platform and a platform service are, it should be clear that Azure's main competitor is Google App Engine. Many people will lump Azure and Google App Engine together with EC2, but in fact they provide very different levels of service. Think about it in terms of a car. Amazon EC2 is really the infrastructure: it's as if someone gave you the frame of a car, and it is up to you as the car-builder to find the appropriate engine, tires, and seats depending on what you intend to do with it (driving on the freeway vs. entering a monster truck rally). With platform services, it's as if you were given the frame of the car with the engine and tires already installed (and you can't remove them). The key parts of the car are there; it's up to you to put in things like the seats and the gas. Notice, however, that you don't have a choice in what engine and tires were provided, so they may fit your needs, but they may not. You have less control of the infrastructure in the sense that it is provided for you and you can't change it, but you also save yourself the complexity of having to figure out what engine would fit in your car, how many cylinders, etc.

Got it?

Shameless Windows Vista bashing

      So for the millionth time I somehow find myself reading a Vista-bashing article (this one) and once again THEY GOT IT WRONG. Just to be clear, it is supposedly an article about Windows 7, which apparently implies that it is a forum for bashing Vista. The more of these articles I read, the less I respect online tech journalism, but I digress. A quick summary of the arguments presented against Vista:
      1. Incompatibility was and still is an issue, albeit the tip of the iceberg
      2. UAC is annoying
      3. Vista is bloated
      4. Vista tried too hard to be pretty
      I suspect that if you have read other articles about Vista, you have probably seen some combination of these complaints. This is by no means groundbreaking stuff; in fact, I'd bet I could find a few articles from 6 months or even a year ago that present just about the same argument. Even still, that doesn't prevent you from getting it wrong.
      Incompatibility was definitely a big issue when Vista came out; there is no getting around that. Basically, the second hardware vendors saw that Vista was getting a lukewarm reception, they put Vista drivers a few notches lower on the list (i.e. they weren't going to do it, or at least not quickly). Then, when some user goes to install their Brother printer and it doesn't work, they get mad. Is that Vista's fault, or the hardware vendor's fault? No reason to point fingers; it just stinks for the end user. In either case, post-SP1 incompatibility is unlikely to be the issue. Sufficient time has elapsed for even the slowest of vendors to get Vista drivers out. If they have not done so already, I definitely put the blame on the vendors; they have had long enough.
      Good ol' UAC seems to be pissing a lot of people off; does this not strike you as odd? OS X and Linux have had the equivalent of UAC since their inception, but no one seems to complain about it there. Yes, UAC likely pops up a lot more often than you are used to in OS X or Linux, but in all reality, if it is popping up, it likely should be. People have become accustomed to pre-Vista Windows, where nothing pops up to notify you that some terrible website is trying to put a Trojan on your system. Realistically, that is just a poor security model and is likely to produce far worse results (in terms of keeping a clean system) than the UAC model. It is a lot easier to prevent malware from installing than it is to remove it once it already has.
      I think when these writers think of Vista they get the image of the Michelin Man running in their heads. How exactly people determine that Vista is “bloated” is a very subjective process and in all likelihood does not imply any actual knowledge of the underlying operating system. Much of this perception comes from people seeing the amount of RAM Vista uses by default compared to other OSes. It's hard not to make that association, but it simply is not valid in this case, not by default at least. Vista has a feature called SuperFetch which proactively puts frequently and recently used programs in RAM even before they are started by the user. Why? It's simple: if the OS guesses what you are going to start and puts it in RAM before you start it, startup time is drastically faster than it would be otherwise. So when someone looks at Vista's RAM usage and sees that it is using 2GB at startup, they wonder, “How is it using so much RAM? I haven't even started anything!” Chalk a lot of that usage up to SuperFetch, and hope that SuperFetch uses as much RAM as possible. Using RAM is GOOD, that's what it's there for, and as long as SuperFetch gives it up when necessary, all is well in Windows world.
      Even still, the Vista bloat argument continues. Consider this: when Macs changed from PPC to Intel hardware, it was a HUGE change. It required a new OS, programs to be rewritten, and just about anything Apple-related that was PPC became outdated. They effectively drew a line in the sand and said it stops here; anything older than this, we are done with. What happened? People bought new Macs. Those left with PPCs were left to wither away out in the cold. This wasn't too terrible for those users, since Mac users (and Apple users in general) tend to be a lot more dedicated to the brand than most people are to just about anything. Let's say Microsoft employed this strategy with their next version of Windows (clearly they aren't with Windows 7, but let's just say for the sake of argument). I don't think any of us could imagine the amount of uproar such a decision would cause. Even though Microsoft wouldn't be changing the hardware (they don't make their own hardware, Apple does), the software changes alone would bring about so much complaining we wouldn't be able to hear ourselves think. The sheer amount of labor required to rewrite software for the new Windows would be enormous and an absolute nightmare. You think people are pissed about Vista? The headache this would cause would make Vista seem like a blip on the radar. Based on the number of people who use Windows, and the amount of software written for it, it is going to be much more difficult for Microsoft to draw that line in the sand.
      Finally, there is always the claim that Vista is trying to be too much like OS X and be too pretty. It's no secret that Microsoft significantly dressed up Vista compared to its predecessors, and it's hard to argue that Vista doesn't look better. Whether or not it is worth the resources it draws is a matter of preference; to me, it is. So Vista is trying to be too much like OS X? Really? Are we talking about specifics in the UI? Because I don't see many. You're not going to see Windows with a “dock” any time soon (especially now that it's patented), and I can't see Windows using left-side close and minimize buttons, or adopting the silver theme by default, so where is the copying? If you are simply making the case that they are trying to make Vista pretty, then you can't fault them for that; part of being a successful operating system is being pretty. Don't believe me? Ask Linux. One of the big pushes in Linux over the past few years has been beautification, and it has paid off. Compare your distro of choice now to how it looked 2-3 years ago and you will see that it is likely a lot prettier, shinier, and slicker overall. Linux demonstrates the case beautifully: pretty is a very important part of being a good OS, like it or not.
      I suspect that as Windows 7 hype continues to build, more and more of these articles will be resurrected from the murky puddles they belonged in. Please, if you are going to make an argument about Vista being “sucky,” think about it before you just regurgitate the arguments of others.

Is Google really a threat? Part 1: The Browser

Let's start with Google Chrome. When word hit that Google was creating a browser, it seemed as if everyone and their parents were downloading Chrome to give it a try. Market share for Chrome quickly jumped up to 1 or 2% in the days following the release, an impressive feat, but in the weeks since it has slowly fallen. Unlike most of the products which Google labels beta, Chrome is very much still a beta. There are quite a few issues which remain unsolved with Chrome, some of them glaring. Even still, I have no doubt that there are Googlers working away right this moment to fix these issues, and improvements will be made. Getting back to the original question: is Google really a threat in the browser market? The short-term answer: not really. Let's categorize people for a moment to see why.
"Techies" were very likely to try out Chrome, and I would assume they accounted for a significant proportion of that initial 1%, as evidenced by many tech sites seeing significantly higher percentages (ReadWriteWeb saw 7%, Silicon Valley Insider saw 6.6%). Unfortunately for Google, they did not release an add-on framework at the time of release, and many geeks are so hooked on their Firefox plug-ins of choice that they couldn't imagine switching to Chrome permanently (AdBlockPlus and Firebug in my case). I'm sure there are many people typing away at such a framework, but so far nothing.
But techies are only a collective few; what about other people? People browsing at work who are employed by large companies are a large contingent (and not mutually exclusive with techies), so how is their Chrome usage? In short, not good. Upon its release Chrome had quite a nasty-looking EULA which many people believed was akin to signing over a firstborn to Google. While not true, it was a firmly worded EULA which had a lot of people concerned, and which Google has since rectified. Even still, many large businesses were concerned about that EULA and as a result told employees not to download Chrome until it had been reviewed. Considering how strict many companies are about using non-IE browsers, it will be difficult for Chrome to gain a significantly large market share in this slice of the world. Finally, there is the general population. "Google what?" is the most likely response you will hear when asking the general population for their opinion on Google Chrome. As ubiquitous as the Google name has become, the reality is that to many people the internet is still that e with a halo on their desktop; it is just that simple. Simply consider the fact that last year over 30% market share belonged to IE6, yes, the same IE6 that came by default with Windows XP circa 2001. The same IE6 which is mired with security flaws and whose rendering abilities are the bane of most web developers' existence. While those were last year's numbers, many sources have seen IE6 holding fairly strong with a 25% market share this year. If this is not evidence of the "Internet = Desktop e" theory, then I don't know what is.
So then, what does this all mean? It means that while Chrome has quite a few innovative features (process per tab, to name one), it is still lacking some of the features needed to draw the techies over. Employees at large companies are unlikely to use Chrome since their legal department hasn't given it the OK, and may never. For the general public, chrome is still what lined their favorite Ford Thunderbird back in high school. By Christmastime this year I can see Chrome getting to about 2-3% market share without really breaking a sweat. Following that, I think Google is going to have to make some big announcements to remind us all what Chrome is, and why we should switch over. So to answer the question: no, Chrome is not a threat.
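Since I brought up process per tab, it's worth a quick illustration of why it matters. Here's a toy Python sketch (purely my own, nothing to do with Chrome's actual code): each "tab" gets its own OS process, so one crashing doesn't take its neighbors down, whereas in a single-process browser one bad page kills everything.

```python
import multiprocessing as mp
import sys

def render(url, crash=False):
    """Stand-in for a renderer process; a crash kills only this process."""
    if crash:
        sys.exit(1)  # simulate the renderer blowing up on a bad page

def open_tabs():
    # One process per "tab", roughly Chrome's isolation model.
    specs = [("gmail.com", False), ("badsite.com", True), ("slashdot.org", False)]
    procs = [mp.Process(target=render, args=spec) for spec in specs]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return [p.exitcode for p in procs]

if __name__ == "__main__":
    # Only the middle tab crashed; the tabs around it are unaffected.
    print(open_tabs())  # [0, 1, 0]
```

In a one-process design the `sys.exit(1)` would have torn down all three tabs; here the "browser" just shows one sad tab and moves on.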

Is Google really a threat?

In the past month or so Google has been involved in two very highly touted releases: Google Chrome, and Google Android debuting on the T-Mobile G1. Both announcements have generated enormous buzz, and having the Google name attached to these products has really fueled the fire. But with the release of these two products I ask, is Google really a threat?

Google Gaming?

Forbes recently published a wildly speculative article about whether or not Google should and will enter the gaming arena. I'm not sure if the Forbes folks have been watching too much of the crazy news with Keith Olbermann, but Google, making games? What are you on? Let's get one thing straight: the only games Google will be making are the ones that come with Android by default, or the ones on gOS. Why on this green earth would Google decide to go after gaming? Yes, gaming is a booming market, and yes, there is advertising revenue that can be generated from games, but let's be honest: Google making games is about as likely as Toyota making MP3 players. Now, it has become clear that Google is willing to spend money developing technologies that help feed their search (read: advertising) business. Google Docs is the perfect example of this; by having you store your documents in their cloud, they have the opportunity to take a look-see and make their ad recommendations just that much better. I assume the belief is that they would enter the game market by making games and not actual consoles, since the console market is only ready for the suicidal. So Google makes a game, and then they make some sort of advertising bar which displays in it (a la NetZero); seems logical enough, right? Wrong. Assuming they were to make an immersive game, and not just a pass-the-time waster (hi, FreeCell), ads are a terrible idea. Back in my heyday I used to play a few games, things like Counter-Strike, Command & Conquer, and the like, and if there is one thing I learned, it's that once you are deep in a game like that there is very little chance you are going to click an ad which pulls you out of your full-screen game, much less even look at one.
Microsoft recently implemented ads in Xbox Live, so when browsing around the demo section you will probably catch an ad or two, but they are not dumb enough to attempt to put ads inside of a game like Halo or Tiger Woods Golf.
The only real advertising I could see happening in games is product placement, like TV shows do when someone like Pepsi pays to get a cast member to take a sip of their soda on camera. Can't you just see Tiger taking a nice big swig of Pepsi after blasting a 350-yard drive down the 18th at St. Andrews?

Oversight of the century?

It's really unfortunate: I was walking into the bathroom of the lovely Chicago O'Hare airport and I saw one of the greatest photo ops I have seen in a while. There was a woman dressed in a very serious dark suit sitting on the floor with her laptop on her lap; funny, but nothing special. Here's the kicker: she's halfway cramped in between a newspaper stand and a pole, likely one of the most uncomfortable sitting positions I have seen from someone in such formal wear. So why would a woman like this put herself in such an uncomfortable position? Simple: there was an outlet nearby and she was charging her laptop. Unfortunately she was packing up by the time I got out of the bathroom, so I didn't get a good chance to snap a picture of it, but I think you get the idea. Now why the story about me using the restroom and people sitting uncomfortably? Because it points out one glaring defect of our nation's airports, and many other sites (including universities) which lack power outlets. People, in case the pigeon didn't make it to your neck of the woods with the message, we are in the 21st century, the information age, remember those tube things?
In the past few years the amount of electronic equipment that the average traveler carries has gone up significantly (the % increase from 0 to 1 is infinity). So when people are traveling in the airport between flights, aside from eating, what are they looking to do? That's right, charge their gadgets; whether it's laptops, iPods, or phones, people always need to charge something. No one wants to be on the flight where their battery dies halfway through because they were watching Californication in the airport before their flight. The only problem is that since most airports are fairly old, there is no way they could have accounted for the vast number of electrical outlets required by the present-day business traveler (come on, try to find somewhere to charge your BlackBerry, I dare you). So what are we left with? Airports which have very few outlets, none of which are marked, and often in very awkward places (I know we've all considered using the outlets under the check-in counter). The situation is so bad that even companies have started to take advantage of the glaring oversight. In O'Hare airport Verizon has set up a counter which is simply 10 stools and divided little work areas, each with two power outlets, solely for the purpose of allowing people to charge their stuff, and if they happen to get bored they can stare up at the big advertising board directly over it. Thanks Verizon, appreciate the consideration, but that's just the tip of the iceberg, and those stools are incredibly uncomfortable. I'm no electrician, but it seems to me that doing some wiring to get a whole bunch more outlets would be fairly trivial and would not take long at all. But there's a lot of airport, and a lot of gates to cover. So let's make it easy on you: how about just labeling the walls or posts where there are outlets, so we don't have to hunt around the airport like a bunch of crack addicts looking for the next fix (CrackBerry, anyone?).
Considering how little traveling I do compared to the very sharply dressed gentleman next to me who is also clamoring for an outlet, it is clear that I am not the only one who has these thoughts. Considering the burlesque show we have to put on going through airport security, can we at least get some power outlets? I mean, if that's not the oversight of the century, I don't know what is.

Why buy the cow

In case you haven’t heard, the excellent website Hulu will be premiering many primetime TV shows either on or before their on-air premiere dates. In case you weren’t aware, Hulu is a very good online video site which has a good selection of TV shows and movies that can be streamed with minimal advertising. For a 30-minute TV show there are generally 3 commercials which are 15-30 seconds long. I was reading this article on Wired which brought up an interesting point: with Hulu airing the premiere of Prison Break on the same day it premiered on TV, why have over a million people downloaded it (illegally) via torrent? This situation is not unique. On October 10th of 2007 Radiohead released their album “In Rainbows” only on their website. Fans were allowed to pay as much as they wanted (including $0, free) for the album, which was available by download. Well, no surprise when shortly thereafter the album appeared on torrent sites. Again, if it is available for free, why would an estimated 2.3 million people download it (illegally) via torrent?

Note: while I would never engage in either of these actions, I may know a few people who do, and so it is their coalesced opinions and my own spin that I include here.


The answer is made up of a few different sets of people.
First, there are the people that didn’t know about the availability, and therefore went to their main method of acquiring that content: torrent. The same way they have done for all the other episodes of Prison Break or Radiohead albums, fire up the tracker of choice and download away. These people can’t be blamed; although publicized, the number of people who knew the content was available was still a very small subset of all those who would otherwise be downloading. The offers might be on the front of the tech section of every online news/blog site, but you have to read it to know.
Next, there are people who heard about it, but were too late. They had already downloaded the show/album before they heard of the promotion. At that point they could go back and watch it on Hulu or download it from the Radiohead site, but much of the allure is lost, although both sites have their revenue opportunities intact which I’m sure they are pleased about.
Finally, there are those people who did not agree with the terms under which they would get the media: ads in the case of Hulu, and a required email address and an ask for donations on the part of Radiohead. To be honest, I really don’t find either too bad at all. Ads on Hulu are significantly shorter than those on primetime TV, you generally only see one per break, and you have all the abilities of a DVR (pause, rewind, etc.) without needing one. Radiohead simply asked for an email address (no spam to date) and for donations (which they made clear could be $0, no CC# required). What’s not to like? Some people just won’t give in… what can I say?

As for these two occurrences, I think they are incredible. They demonstrate an embrace of technology by the two industries which have classically fought it off like Ugg boots. I just have two comments about these:
1. Hulu, why on earth do you not have an iPhone app out yet? Are you waiting for me to make it?
2. Radiohead, why not make the album available via torrent? You save on the hosting cost and make it more widely distributed. Seeing as 2/3 of the downloads came from torrent, why not focus on those people and hit them up for donations instead?


A Promise

It has become really clear to me that in terms of Vista advertising, bloggers are just going to bash, bash, bash, and then bash again, much the same way they treat the operating system itself. As opposed to posting responses every time a new commercial drops, my official stance is "see message below." Everything that was applicable before is still applicable, so there is no reason to needlessly post more updates. With that, I promise no more blabbering about the new ad campaign; just see the post below and you will get my opinions.

The new Microsoft ad

Well, it’s out. And if there isn’t going to be enough gasoline in this fire I figure I might as well throw my two cents (worth of gasoline) in.

I’m not really sure what people were expecting this ad to be. Based on my very unscientific study, this ad was expected to be the hippest, coolest, most ultimate Vista-rocks-my-world ad… featuring Bill Gates and Jerry Seinfeld. In case you were disappointed by the distinct lack of heavy rock music (a la video game commercials) or some sort of play on the Apple ads, you clearly haven’t been paying attention. If you had read my previous article entitled “Don’t be surprised by Seinfeld,” you would have been much more in tune. Is it going to make you fall off of your chair laughing? No. Was it a direct shot across the bow at Apple? No. Was it edgy/risky/crazy? No. It was Microsoft.

It was mildly entertaining, but only because it relies on two of the most famous faces in the world (t-minus 10 years ago, of course). The premise is a play on BillG’s well-known… thriftiness… and has some shout-outs to those nerds who have been paying attention. In terms of Seinfeld, it’s roughly the Visa ads, except without Superman.

I think the real surprise of it all is the lack of heavy branding or slogans. Spanning the entire minute and thirty seconds of commercial, the word Microsoft was mentioned once, in passing, by Seinfeld. Then there is the slogan “The future. Delicious.” followed by the Windows logo. No Microsoft logo, no mention of the word Windows, or dare I say… Vista.

I’m sure pundits are savagely pounding at their keyboards about this one: “$300 million for this!” or “that’s going to beat Apple?!” Stop it. Please. Just, stop it. First, I’m very sure that this is the first of many (random guess says 5) commercials to come out of this campaign. Second, like I have said, you will not see direct shots at Apple. Why? By “competing” with Apple, Microsoft would be admitting that they are on their level. Believe what you will, Microsoft won’t be admitting that any time soon.

Kindle the college demographic

There have been some recent news articles about how Amazon is planning to enter the textbook market by making a textbook version of their Kindle eBook reader. I think I speak for everyone when I say: finally, what took so long? Some of us (me) have had this idea for some time now (three years) and were hoping someone would do it (I have the design to prove it), and do it right. The college textbook market has been almost comically ridiculous, but the current options are so poor that the incumbents can still dominate. So many of us have tried buying books online, and although prices are much better, there is a significant hassle associated with it. In most cases you have to wait until the first day of class to get the syllabus; only then can you figure out exactly which books you need (especially editions) and order them. Media Mail is awesomely cheap, but coupled with slow processing times it can make ordering online an extremely slow process. The alternative is to download textbooks via torrent. All I can say is that having three huge textbooks on a flash drive is absolutely awesome. Furthermore, being able to search the text within the books is almost priceless.
       Clearly the Kindle is a very intriguing idea, especially in the textbook market. For a long time people have said that eBook readers were just flat-out subpar. Hard to read, poor battery life, and small memory were the most common complaints, and the Kindle seems to have solved some of these (battery life is mediocre).
       In my reincarnation of the eBook reader I took a different route than the Kindle. I really didn’t see the need for a keyboard; all you really need is a power button and two buttons to flip the pages. In the case of textbooks I thought it would be very cool to have some highlight functionality, so add a stylus and you could highlight portions of the text.
       When I look at the Kindle there are a few things that I can only hope for which would make it a truly useful tool. First, some form of highlight/tagging feature which allows you to select parts of the text. Then, once selected, you can iterate through all of the highlighted items for a quick review of the most important parts. Next, if Google’s $700 stock price has taught us anything, search is key. There is a keyboard on the thing; please make use of it and allow people to search through textbooks. Sometimes you need a quick reminder of the truth table for a clocked D latch, and if I can’t search in the textbook, everyone will just resort to searching for it online, or spend a while finding it in the book.
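Neither of those features would be hard to build. Here's a toy Python model (purely my own sketch, no relation to Amazon's actual software) of how highlight-then-review and in-book search might look:

```python
class EBook:
    """Toy model of the highlight/search features described above."""

    def __init__(self, pages):
        self.pages = pages          # list of page strings
        self.highlights = []        # (page_number, excerpt) in reading order

    def highlight(self, page_number, excerpt):
        # Only allow highlighting text that actually appears on the page.
        if excerpt not in self.pages[page_number]:
            raise ValueError("excerpt not found on that page")
        self.highlights.append((page_number, excerpt))

    def review(self):
        """Iterate the highlights for a quick pre-exam skim."""
        return [excerpt for _, excerpt in self.highlights]

    def search(self, term):
        """Return page numbers containing the term (case-insensitive)."""
        term = term.lower()
        return [i for i, page in enumerate(self.pages) if term in page.lower()]

book = EBook(["A D latch is level-sensitive.", "A D flip-flop is edge-triggered."])
book.highlight(0, "level-sensitive")
print(book.review())           # ['level-sensitive']
print(book.search("d latch"))  # [0]
```

The point of the sketch: a search that jumps you to page 0 for the D latch beats flipping through a 900-page hardcover, and the highlight list doubles as an instant study guide.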
       I read an article which touted the Kindle as the next iPod. Please. As innovative and interesting as the Kindle is, if it were 1/10th as popular as the iPod I think it would be a success in most people’s books. Its price point has to drop significantly from $350 for it to even have a chance.

Don't be so surprised by Seinfeld

I’ve read so many articles about Jerry Seinfeld apparently becoming part of the new Windows Vista ad campaign that I am about ready to explode. Most articles are quick to point out how Seinfeld is a long way from his popularity of 10 years back, or how on his show there was always a Mac in his apartment (apparently that makes him the biggest fanboy of all). To address the ludicrous first: I have yet to see any article or interview that actually has Seinfeld saying he owns or even likes Macs. I have no doubt that the computer in the show wasn’t an accident; if you have ever seen advertisements, and more recently commercials, you will notice that all computers are mysteriously white and curvaceous. Macs are pretty, there is no question about that, so why would they put anything less in their commercial? Yes, it was mostly an Apple IIe, which may not be pretty now, but you don’t even want to see the PCs back then. Show me an interview, article, picture, and then we can talk…

People seem perplexed by them picking Seinfeld; I mean, weren’t they supposed to pick someone hip, cool, and just flat-out AWESOME?! Not to be brief, but no, absolutely not. If there is one thing that Microsoft surely is not going to do, it’s try to out-cool Apple. Criticize as you please, but Microsoft is not dumb, and they know that trying to out-cool Apple is a losing battle. Now, I was as surprised as any when I heard the news; I didn’t even think Microsoft would have a “spokesperson” for the campaign, since that is an extremely risky venture. But if you are going to pick someone, Seinfeld is an excellent candidate. When you think about the target market, as much as they would love to go after the hipsters, Microsoft has a somewhat older market to cater to. So although Seinfeld has been out of the limelight for a little while now, those of us who are old enough to remember the show like Jerry and don’t have one bad thing to say about him. You won’t find pictures of him on TMZ smoking outside a club or running over paparazzi; in fact, aside from his show, Seinfeld lives a quiet, unpublicized life.

As George said:
George: Well, I got just the thing to cheer you up. A computer!
Huh? We can check porn, and stock quotes.
I probably shouldn't bring up the episode which involves Jerry and computers; the white laptop crowd would have a field day…

Laptops in Corp Land?

I started reading this article over on Ars Technica, and the more I thought about it, the more I saw a fundamental flaw in its logic. To quickly summarize, the article makes a case for why desktops will very soon only “appeal to a niche market,” even in the corporate world. Since employees are traveling significantly more and mobile broadband provides reasonable coverage, laptops will soon dominate the corporate market. The article continues about how laptops serve as a viable extension of one’s personality, and if you’ve been paying attention, you will know how annoying I find that.

Even still, I will concede the home market. If you have been paying attention you would have heard that notebooks now outnumber desktops in terms of sales, and I don’t see much reversing this trend in the home market.

The corporate market? Are you kidding me? Clearly the author sees corporate employees as a bunch of busy bees just buzzing their way around the country for months on end, only returning home to the hive for the occasional meeting and suckle of honey (take that as you will). Perhaps Mr. Reisinger has been eating too many dinners out with his journalist cohorts, because although people are traveling more, it is not nearly on the scale he is implying. Without sounding too green, I might venture to say that most non-sales people probably spend about the same time on vacation as they do traveling for work. There are, however, practical uses for laptops which don’t require an airport or gas station. Having a laptop to take around the office, into meetings, even home can be quite convenient, and that is probably where most laptops get their use. When you think about these three cases you will notice the fundamental struggle with laptops: size/portability vs. usability. For the most part, the bigger the laptop, the more usable it is (bigger screen, full-sized keyboard, trackpad) and the less portable it is (larger, heavier). Yes, some laptops make better use of space than others, but it’s rarely by much. Based on my empirical (and utterly unscientific) study I’ve found that the median home laptop is 15” while the median business laptop is 14”. Like I said before, this is because business users travel more, albeit still not very much.

One thing that becomes apparent when looking at laptops, however, is that they are not even close to desktops in terms of usability. Clearly a smaller monitor has a pretty large effect on usability, but even worse are the mice. Trackpads are the most popular “pointing device” in laptops, and basically the only improvement we have seen in them over the past many years is the scroll bars. Multi-touch, you say? Meh, wake me up when you can do something interesting. And if you have the ThinkPad “TrackPoint”? Ouch.

Wait a minute, why not just buy a docking station and get the best of both worlds? Well, you smart cookie, that’s a good point. You get the portability of a laptop since you can undock it and take it with you, and when it’s at your desk you can use full-size peripherals. Unfortunately, one other thing you will get with a dock is a lighter wallet; they do not come cheap at all. Why pay $300 for a docking station when you can get a whole new desktop for $500?

If you aren’t yet convinced that laptops won’t be taking over the corporate landscape any time soon, you should take a look back over this article, because you must be missing something.

Why waste your money?

Throughout the past few years quite a few people have asked me for advice on buying computers; whether it’s because I’m a Computer Science guy or Jewish is not necessarily clear, but I sense (hope) it’s probably both. In most cases I get the standard “I just want to surf the web, check some email, and the occasional X,” where X is some software that requires a somewhat more powerful machine (e.g. photo/video editing, gaming, etc.). Interestingly enough, my answer over the past few years has been the same, and it is this: “You have two choices: A) you can bargain hunt for a few weeks and find an Acer/Compaq/HP deal with decent specs that will run you $500-600 depending how hard you look, or B) get a coupon for a Dell laptop priced at $1000 and only pay ~$800.” Of course, since I also fall into that category, I practice what I preach and have ordered combo B with a large soda a few times. To be honest, as long as you do your homework you will find a good machine which can handle all of the WoW you throw at it.

With that in mind, I am baffled when I talk to people who have and buy $1500+ laptops. Before you get offended: if you require a quad-core, 13”, 2 lb. laptop (or an Apple) then by all means, close your eyes and hand over your plastic. Those crazies aside (I knew I was going to offend people), I have seen a lot of expensive laptops being purchased, and they all tend to share two key things: they look sexy, and they have very mediocre specs. Clearly a sexy-looking laptop is a very important feature; I know that when I am out and about on the town I cannot be seen without a sexy laptop to match my handbag. Now, I am not going to lie to you and say that Macs/Dell XPSs/Sony Vaios are ugly, because they clearly are not. Yet I find it difficult to justify paying so much more for a nice-looking laptop when realistically most laptops look pretty nice. Before you boil over about my mediocre-spec comment, I’ll explain. One would think that if you were shelling out almost twice the amount of money, you would get a beast of a machine. Not so fast: the low-end models are generally WORSE than the low end of less sexy laptops. So… you pay more to get less? That is correct. But… if you want to pay MORE money, then you can actually get a sexy laptop with decent specs.


Note: For those of you who are squirming out of your seats about how Macs don’t require specs as good to run decently, hold onto that thought, I’ll be there soon.

Workspaces in Windows: Redux

I was talking with some coworkers about random technical stuff the other day and had a good opportunity to bring up the workspaces question. Given that both guys are very intelligent, articulate, and considered "power users," I was interested in their take on it. The first point they made was that if Microsoft wanted to do workspaces in Windows, they would have done them. It is not a technical limitation or a legal issue; it was a matter of choice. That means they didn't believe that enough people would use the feature, and too many would be negatively impacted (see prev. article) by having it. Yeah, that sounded about right, but then, "aren't there still enough people out there who want it to have it available, but disabled by default, or only in a Professional/Ultimate-type edition?" I asked naively. To my surprise I found that even within the "technical" community of "power users," a large portion don't utilize workspaces and/or don't find them useful enough to install them.

Realistically, if you can't convince a significant portion of the more technical users to ask for workspaces, or to use them, then it's pretty clear why Microsoft wouldn't include them by default. I guess those of us that want them will have to rely on third-party tools, although so far I have not really been happy with what I have seen. Guess I will have to keep looking harder.

Workspaces in Windows

For some time now I have been left to wonder why there are no "workspaces" in Windows. In the time that I have been using Linux I have found multiple workspaces to be incredibly useful. Being able to have windows open on different workspaces is a really great way to have quick access to information without it constantly distracting you away from work. I usually end up with a layout something like this:
1. Browser w/ email, sports page, Slashdot
2. gEdit, terminal, browser for programming reference
3. Either a different programming window (like #2) or a paper I'm writing or reading
4. Usually empty, otherwise it could be another space like 1-3

Now, initially it doesn't seem like much; you could easily navigate around a Windows desktop with those windows open. So what's the big deal? Separation. Having email/sports/Slashdot open, refreshing in the background, is just too tempting. All too often I find myself flipping over to that window to distract myself with Slashdot for a while.
So Linux has workspaces, and as of recently OS X has them as well (Spaces), which leaves only Windows.
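To make the separation point concrete, here's a toy Python model of workspaces (just an illustration of the concept, not how any window manager actually implements it): windows live on whichever workspace they were opened on, and only the active workspace's windows are in your face.

```python
class Workspaces:
    """Toy model of virtual workspaces: only the active one shows its windows."""

    def __init__(self, count=4):
        self.spaces = [[] for _ in range(count)]
        self.current = 0

    def open_window(self, title, space=None):
        # New windows land on the active workspace unless told otherwise.
        self.spaces[self.current if space is None else space].append(title)

    def switch(self, space):
        self.current = space

    def visible(self):
        # The separation argument in a nutshell: Slashdot on workspace 0
        # simply isn't on screen while you're coding on workspace 1.
        return self.spaces[self.current]

desk = Workspaces()
desk.open_window("browser: email/sports/slashdot", space=0)
desk.open_window("gEdit", space=1)
desk.open_window("terminal", space=1)
desk.switch(1)
print(desk.visible())  # ['gEdit', 'terminal']
```

While you're on workspace 1 the browser isn't minimized or buried behind other windows; it's simply not there, which is exactly what keeps it from tempting you.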

There are a few main arguments for why multiple workspaces are worthwhile. First, many users (myself included) find that workspaces improve productivity greatly. In the jobs I have worked at, I have often seen co-workers with TONS of windows open, and they have quite a time managing them. For those of us who don't have multi-monitor setups (or even if we do), workspaces provide a lot of extra room to work with.

Now, if workspaces are all I have cracked them up to be, why wouldn't Microsoft jump through rings of fire to make them? Well, there are a few reasons why I could see them not doing this. First, there is probably a large population of Windows users who would likely not use them. Think about the grandmas and moms who use their computers to check email and look at pictures of their kids online; they either won't know about workspaces or won't see any use for them. This doesn't seem that bad, since they would effectively be in the same position as they are now. It turns out not to be so simple: one could imagine how many times windows would magically "disappear" to other workspaces and users would have no idea how to get them back. Given the number of people who use Windows on a daily basis, the number of times this would happen would be astronomical.

I think a reasonable solution to this would be to only include workspaces in upgraded Windows editions: leave them out of "Home" editions but include them in "Professional" or "Ultimate" editions. In doing this, workspaces would be provided to business and power users, while home users wouldn't have to deal with them; everyone is happy.

Originally I wrote about some of the patent issues that have come up with Red Hat and Apple from IP Solutions, and how they would affect Microsoft's decision to include this feature. In looking into it, there seems to be more going on than I originally noticed, so I'm going to do some homework on the topic before I comment here.

Tune in next time, when I talk about how file sharing and BitTorrent help Web 2.0 technologies.

To touch or not touch

I've had this opinion since the 3G iPhone release, but am only getting around to writing about it now.

I think I am part of a very large contingent of people who were (and still are) awaiting the release of the iPod touch V2. Since it was first released I knew that the touch was an incredible product, and I enviously played with a few of them. Music, of course; video, excellent; and just to top it off, WiFi. Being able to connect to WiFi hotspots at great speeds, without having to pay a cell provider a bunch of money for a limited service, is a huge plus for the touch. Yet two things held me back from jumping on one when it first came out: space and cost. Upon its release the touch came in 8 and 16GB sizes, which carried price tags of $300 and $400 respectively. In talking to my compatriot Phil, we both came to the same conclusion: it's simply not enough space. 16 gigs on a device which can store video, including full-length movies, is insanely small. And this was coming from a guy who was having trouble squeezing just his music onto a 20GB iPod photo. I decided that I would wait for the second round of iPod touches, since they would increase capacities to 32 and hopefully 64GB and make some software improvements.
I was really disappointed when I found out they were going to release a larger 32GB touch, but that it would still be first generation. Worse, they added it on top of the other two sizes without a price drop, which left the largest model with a hefty price tag of $500. It also meant that a touch V2 would not be coming anytime soon.
Fast forward to today, and I am still torn about what to do. Surprisingly, Apple has yet to drop the 8GB touch, which means prices have stayed the same. To add fuel to the fire, Apple has continued to stick it to touch users by charging them "nominal" fees for software updates. There have been (I believe) three updates so far which have been free for iPhone users but which touch users have had to pay for. Why, Apple? I don't see how Apple can justify charging touch users but not iPhone users; either charge both or neither.
I guess for now I'll continue to wait and hang on to my old-school iPod. Perhaps soon enough Apple will decide to lower prices on touches and toss in those updates for free, and then perhaps I'll change my tune and put my money where my mouth is.

I'm back, with content

Today I decided to take a stroll and get some dinner. From this came two things. First, I made the unfortunate decision to eat Arby's, which I had not had in ages. Second, I had a chance to think about a few ideas that had been bouncing around in my head. So here are the fruits of that labor.

About Me

Hello and welcome to everyone visiting; glad to have you here. My name is Jonathan Kupferman, and I recently graduated from UC Santa Barbara with a degree in Computer Science. This fall I will be returning to UCSB to get a Masters in Computer Science. I would like to do some research or a project relating to cloud computing or services offered in the cloud, but that is TBD for now.

The main purpose of this blog is simply to give me a place to write my own take on the things I see and read about in the tech world. Whether it's ideas I have been thinking about or comments on things others have written, it ends up here on my blog. I welcome comments or opinions on any posts I have made, since these ideas are always up for debate.

More information can be found on my website jkupferman.com.