Just a quick note here… I saw Wikipedia's new look this evening and I have to say, I like it.
Category: Online
-
Apology
I got an email from Al Fasoldt yesterday in response to this piece I wrote last year. He wanted to know what I was thinking when I wrote it, and basically called me on it. I remembered being harsh when I wrote it, but since it was last July I didn't remember any real details (it was on the software Hotbar, and Web tracking in general), so I went back and re-read what I wrote.
Wow.
I wasn’t just harsh and sarcastic, I was downright nasty. Looking back on it now, it was pretty uncalled for, and I honestly don’t know why I was so rude. So, I emailed an apology to Fasoldt, and I’m doing the same publicly (since I lambasted him here in public): I apologize for being so nasty and writing that entry up the way I did.
-
Blog software chart
For anyone looking at other weblogging software (Jake), this is really good: Blog Software Breakdown. It has a fantastic chart of features on ten popular weblog packages.
This chart displays attributes of different user-installed blog software packages side-by-side for comparison. Only server-installed scripts will be included in this list. (Sorry, no Radio, Blogger, etc.)
Via Ensight.
-
MT Comment
What with the current brouhaha over Movable Type's licensing and payment scheme for the version 3 software (what, you want a link? Feh, go Google it), all I can really say is, damn it's sure handy to have written my own system.
:)
I notice that a lot of people are seriously considering migrating to WordPress. That’s cool, it uses PHP and seems pretty solid.
-
Latitude and longitude
Here’s an interesting site I stumbled upon today: The Degree Confluence Project. From their homepage:
The goal of the project is to visit each of the latitude and longitude integer degree intersections in the world, and to take pictures at each location. The pictures and stories will then be posted here.
Sort of like a blog post for every latitude and longitude intersection on Earth (well, every one on land, anyway). Cool idea. Here’s the nearest confluence to Bend.
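For the curious, finding your nearest confluence is just a matter of snapping each coordinate to the nearest whole degree. A toy sketch (the coordinates for Bend are approximate, and rounding each axis independently gives the nearest intersection in lat/lon terms, not strictly by great-circle distance):

```python
# Toy sketch: snap a GPS fix to the nearest integer-degree
# "confluence" by rounding each axis to a whole degree.

def nearest_confluence(lat, lon):
    """Return the integer-degree intersection closest to (lat, lon)."""
    return round(lat), round(lon)

bend = (44.06, -121.31)  # approximate lat/lon of Bend, Oregon
print(nearest_confluence(*bend))  # (44, -121)
```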
This reminds me of another idea I had along these lines after reading an article in Discover Magazine: geographically-based Web browsing. It's not a new idea, I can't claim it, but here's the gist: You have a portable device that's connected wirelessly to the internet (laptop, PDA, whatever) and is GPS-enabled, so you have realtime GPS coordinates for wherever you are and a live net connection. Then, you browse pages that aren't accessible via a Web address, but accessible instead based on your current location—tagged by the latitude and longitude fed via the GPS. These "pages" can be like standard Web pages—ads, for instance, for stores that might be close by—or they can be more interactive—forms for users to enter notes tagged to that location that can be read by others. Virtual graffiti.
So, there would be pages and content that you could only access while sitting at a certain bench in the park, and totally different stuff that could only be accessed in front of the shoe store downtown, etc. etc. Sort of a cybergeek way to "map" the Web onto the real, 3D world. To find pages you'd have to navigate to the corresponding real-world location. I like the user interaction part of it, too, the thought being that anyone could leave those "notes" for others. That's pretty key. The term I had at the time for all this was "geosurfing."
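The core of that data model can be sketched in a few lines: quantize the GPS fix to a grid cell and key the notes off that, so content is only reachable from (roughly) the spot where it was left. Everything here—the function names, the grid size, the Drake Park coordinates—is invented for illustration:

```python
# Hypothetical "geosurfing" sketch: notes keyed by a quantized GPS fix,
# so you can only read what was left near where you're standing.

from collections import defaultdict

GRID = 0.0001  # roughly 11 m of latitude per cell; a stand-in for "this spot"

def cell(lat, lon):
    """Quantize a GPS fix to a grid-cell key."""
    return (round(lat / GRID), round(lon / GRID))

notes = defaultdict(list)  # cell -> notes left at that spot

def leave_note(lat, lon, text):
    notes[cell(lat, lon)].append(text)

def read_notes(lat, lon):
    return notes[cell(lat, lon)]

# A note left on a park bench is readable from the bench...
leave_note(44.0582, -121.3153, "best bench in Drake Park")
print(read_notes(44.0582, -121.3153))

# ...but not from a few blocks away.
print(read_notes(44.0571, -121.3120))  # []
```

A real version would need fuzzier matching than exact grid cells (GPS jitter alone would move you between cells), but the lookup-by-location idea is the whole trick.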
Imagine some of the cultural weirdness this could engender: most content would be tagged to “people-safe” areas like sidewalks, parks, buildings, etc., but there would always be daredevils who would tag a geosite corresponding to the middle of a busy city street or freeway, accessible only to those brave or stupid enough to try. Or horny teenagers (or porn entrepreneurs) would have cached geosites of porn in secret or obscure places (creepy thought: like the end of the pew third row from the back of the local church), or in bars to help enforce adult-only sites. Geosites near movie theaters could have user-posted reviews of what’s showing, or spoilers, and restaurant sites might have similar notes—need to figure out a good wine or recommended dish when on a date? Check the local notes discreetly. It goes on.
The main drawback? No ubiquitous WiFi. So while this might be a cool application to build (the data model and concepts are sketched out pretty well in my head), and might work in a large, well-wired city like San Francisco or New York, it really wouldn’t work at all here in Bend, and that’s obviously where I’d most like to use it. So, filed away for the future.
-
Net Meme Threads
Inspired by Tim Bray:
From We Interrupt This Broadcast by Joe Garner:
The Potsdam communique arrived in Japan on July 27.
Instructions: Grab the nearest book, open it to page 23, find the 5th sentence, and post its text along with these instructions, and point back to where you got the idea so that we can follow the threads.
-
Bots and JavaScript
Here’s something to think about: do any search engine bots and crawlers recognize and parse JavaScript? I haven’t heard of any (and I’m really too lazy right now to do any real research
:)
), but I got to thinking about this today, and there's really no reason that they shouldn't be able to handle it. Sure, there's a lot of cruft and dross in JavaScript code that isn't relevant in a searchable context, but what about something like what I've been working on recently: dynamic menus? Each menu item points to a valid page with some contextual link text, but since the menus are generated in JavaScript, the search engine process parsing the content out of the code might easily pass it up and miss the links. Those same links are ultimately being repeated in the actual content of the page, so they'll be picked up for sure, but what about next time?
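You can see the problem with a toy crawler. Python's stdlib HTML parser (standing in for a search engine bot here) reports `<a href>` tags but treats everything inside `<script>` as opaque text, so a link assembled by `document.write` never surfaces. The page markup is invented for the example:

```python
# Sketch of why a non-JS-aware crawler misses script-built menus:
# HTMLParser sees <a> tags in the markup, but the contents of
# <script> are treated as raw data, not parsed as tags.

from html.parser import HTMLParser

PAGE = """
<a href="/about.html">About</a>
<script>
  // dynamic menu: this link only exists after a browser runs the script
  document.write('<a href="/archive.html">Archive</a>');
</script>
"""

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

collector = LinkCollector()
collector.feed(PAGE)
print(collector.links)  # only the static link: ['/about.html']
```

A bot that wanted the archive link would have to actually execute (or at least heuristically scrape) the script, which is exactly the extra step crawlers skip.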
Of course, then it would be easy to abuse search engine rankings, by stuffing JavaScript full of hidden and obfuscated content. Perfect for the snake oil of Search Engine Optimization. Even so, though, there might be a lot of content or linkage going unnoticed…
-
The Google Platform
I’ve already seen several links to this today (the first from UtterlyBoring), and it’s too interesting not to point to.
The post in question posits this: Google is a platform. Not a "platform" in the same sense that Amazon and eBay are platforms (custom Web applications that allow some programmatic user interfaces), but an actual computer/operating system/development platform—something I had suspected for some time, but had never managed to coalesce my thoughts on this succinctly.
What is this platform that Google is building? It’s a distributed computing platform that can manage web-scale datasets on 100,000 node server clusters. It includes a petabyte, distributed, fault tolerant filesystem, distributed RPC code, probably network shared memory and process migration. And a datacenter management system which lets a handful of ops engineers effectively run 100,000 servers….
Google is a company that has built a single very large, custom computer. It’s running their own cluster operating system. They make their big computer even bigger and faster each month, while lowering the cost of CPU cycles. It’s looking more like a general purpose platform than a cluster optimized for a single application.
While competitors are targeting the individual applications Google has deployed, Google is building a massive, general purpose computing platform for web-scale programming.
It's one of the better tech reads I've seen in a while. Very eye-opening.
Now, of course, my curiosity is taking hold, and I’d love to take a crack at developing for that platform!