Sunday, February 27, 2005

Software updates

From alpha to beta, two major projects just released their first betas for their upcoming releases:

Thursday, February 24, 2005

Software updates

Mostly just point releases this time...
  • Firefox 1.0.1 has been released. Since the middle digit is still zero, don't expect any new features; it's mostly bug fixes and security patches.
  • BS Player 1.21 (build 816) - minor bugfix to 1.2
  • OpenOffice 2.0 Beta (m79) - drawing ever closer to the eventual 2.0 release.

Wednesday, February 23, 2005

Securing a MOO

...or MUD, MUSE, MUSH, etc.

Target audience: an experienced Unix admin running a dedicated moo. Just mentioning the idea may be enough to spark it, and you can work out the rest on your own, but be sure to read the security section below.
This isn't for everyone. If it sounds confusing, it's probably not worth trying.

I recently discovered an archived copy of my old MOO from college. Having a cable modem, I thought, why not just host it on my Linux box and let my friends who used to play connect to it for some reminiscing. Then I thought about security. While it doesn't allow interaction with the host environment (my Linux box would still be safe), logins are done via telnet so the character passwords are vulnerable to sniffing.

So I set out to secure it while staying as compatible as possible and without putting too much effort into it, since I'm not planning on keeping it up; this is just a quick "hey guys, remember this?"

The obvious direction is to use SSH. The basic idea is that users SSH to my system and, from there, telnet to localhost to log in. Here are the steps I took:
  1. Create a new user (I chose "moo"), make the password simple (e.g. "moo").
  2. Change the login shell for moo to be /usr/bin/mooproxy
  3. Save the code below to "mooproxy.c"
  4. Compile it using: gcc -s -o mooproxy mooproxy.c
  5. Move the new binary to /usr/bin/
/* mooproxy.c */
#include <unistd.h>

int main(int argc, char *argv[]){
    /* Replace the login shell with a telnet session to the local moo port. */
    execl("/usr/bin/telnet", "telnet", "localhost", "7777", (char *)NULL);
    return 1; /* reached only if execl fails */
}


Notes:
  • Be sure to change the port number to match where your moo is running.
  • Change /usr/bin/telnet if your telnet binary is located somewhere else.

Security:
  • Set your firewall to block the moo port (7777 in my case) as the users will come in via SSH.
  • Set your SSH server to disable port forwarding, SCP and SFTP access. ssh.com's server and Vandyke's Vshell server both allow per-user restrictions on port forwarding, but OpenSSH (the default for most Linux systems) does not. Leaving port forwarding enabled for a guest user is worse than the problem we're trying to solve here, so don't do it.
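For the firewall part, with iptables the rule might look like this (a sketch; it assumes eth0 is your Internet-facing interface and 7777 is your moo port):

```shell
# Drop outside connections to the moo port; users come in over SSH instead.
iptables -A INPUT -i eth0 -p tcp --dport 7777 -j DROP
```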

Disadvantages:
  • Customized MUD clients won't work.
  • Mis-configuring sshd can compromise the security of your whole network.

Advantages:
  • Character passwords are secure.
  • The MOO server doesn't need to be changed; it doesn't even need restarting.

Variations on the idea:
If you're insistent upon running OpenSSH and want to allow port forwarding for yourself, but not the moo account, there are other options. You could pick an alternate port on your firewall and forward that to sshd on a different machine. That machine can have "AllowTcpForwarding no" in its sshd_config. It may even be possible to run two sshd's on the same machine using two different config files and listening on two different ports, but I've not tried that.
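If you try the two-daemon route, the second instance might be started like this (untested, as I said; paths assume OpenSSH on Linux):

```shell
# /etc/ssh/sshd_config_moo would contain at least:
#   Port 2222
#   AllowTcpForwarding no
# Then launch a second daemon against that config:
/usr/sbin/sshd -f /etc/ssh/sshd_config_moo
```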

The only reason the C code is needed is because you can't put arguments to the login shell in the password file. Telnet needs at least one argument and in this case, two. Running telnet with no args would leave them at a "telnet>" prompt where they could then telnet to any TCP port on any system on your network... a bad idea. Using a shell script as the login shell also falls into the bad idea category. If you've got a suggestion on another way around having to compile the C code above, let me know.

If you have an sshd that allows very fine control of port forwarding, it's possible to allow the moo user to forward just to that one port. This would let users keep their preferred mud client, but at the cost of a more complicated connection setup. Most sshd's aren't that configurable anyway.
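One concrete example I know of: OpenSSH can restrict forwarding per key (not per user) via the permitopen option in authorized_keys, though it only applies to public-key logins. A hypothetical entry for the moo account (the key itself is abbreviated):

```shell
# ~moo/.ssh/authorized_keys - forwarding allowed only to the moo port
permitopen="localhost:7777",no-pty ssh-rsa AAAA... moo-guest
```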

Have I missed something? Do you have a suggestion for an improvement? Drop me a note.

Software updates

Thursday, February 17, 2005

TiVo Series 1 video streaming

TiVo's added some cool new features to the Series 2 units, in particular what they call "multi-room viewing". If you have multiple Series 2 TiVos, you can lie in bed and watch shows that the TiVo in the livingroom has recorded.

What? You don't have multiple Series 2 TiVos? But you've got a Series 1 TiVo and a PC (or more). Wouldn't it be nice if you could sit at your PC (or use your HTPC that's already wired to your TV) to play shows that you've already recorded on your TiVo? Well, you can!

It's more involved than just a couple clicks, but a mildly experienced hacker shouldn't have any trouble. Once it's set up, you'll be able to use a web interface to get a list of shows on your TiVo and click one to start it playing. I installed this remotely for my parents in about 10 minutes (having practiced on my own TiVo) and they love it.

Here's what you need:
Once you've got all those, it's time to start installing. It's been several years since I installed my TiVoNet card so I'll have to refer any questions to some online FAQs.


Installing TiVoWeb Plus
Short version: FTP the .tpm to your tivo and run it, then edit tivoweb.cfg and change TyShowLinks to 1.

Detailed version:
  1. Open your downloads folder and extract the ZIP file. There are four files in there, but we're only interested in the one named TivoWebPlus-1.0-final.tivo.tpm.
  2. Open a DOS window (Start -> Run -> "cmd") and CD to where you extracted TivoWebPlus-1.0-final.tivo.tpm.
  3. Connect to the TiVo by typing: ftp 192.168.1.3 (use the IP address for your tivo)
  4. At both the login and password prompts, just press RETURN
  5. Once presented with an ftp> prompt, type: cd /var/hack
  6. Then upload the file: put TivoWebPlus-1.0-final.tivo.tpm
  7. Disconnect: exit
  8. Login to the tivo: telnet 192.168.1.3 (use the IP address for your tivo)
  9. Change to the hack directory: cd /var/hack
  10. Run the installer: ./TivoWebPlus-1.0-final.tivo.tpm (note the ./)
  11. I think it asks if you want to run it at boot, say yes. (sorry, it's been too long, I forget the exact phrasing)
  12. Change to the new tivoweb-plus directory: cd tivoweb-plus
  13. Unfortunately, it defaults to not showing the clickable links that we need so we'll have to edit the config file. If you've got vi installed, you can use that to edit tivoweb.cfg and change TyShowLinks = 0 to TyShowLinks = 1. If you don't have vi installed, keep reading.
  14. Display the current contents: cat tivoweb.cfg
  15. Copy the output to notepad.
  16. Change the line that says TyShowLinks = 0 to TyShowLinks = 1. It should now look like:
    UserName =
    Password =
    Port = 80
    Prefix =
    Theme =
    DescriptionHover = 1
    MultiDelete = 1
    TyShowLinks = 1
    EthernetInterface =
  17. Copy the new text.
  18. Rename the original config (it will be our backup): mv tivoweb.cfg tivoweb.cfg.orig
  19. Start a new config file: cat > tivoweb.cfg
  20. Paste the text you copied in step 17.
  21. End the file by pressing CTRL-D.
Now you should be able to browse to your tivo: http://192.168.1.3 (again, use your tivo's IP address). Since we've changed the config file, you'll need to click on the Restart link at the bottom and select Full Reload. This only restarts the web server, it doesn't reboot the TiVo.

To see a list of what's on your TiVo, click Main Menu -> User Interface -> Now Showing. There should be a column on the right end of each row labeled "view". It won't work yet, but soon...


Installing vserver on the tivo
Short version: extract vserver-ppc-s1-exec from the .tar.gz and upload it to the tivo as just "vserver". Add a line to rc.sysinit.author to launch vserver in the background at boot time.

Detailed version:
  1. Extract the file "vserver-ppc-s1-exec" from the .tar.gz (WinZip can do it)
  2. Rename vserver-ppc-s1-exec to vserver.
  3. FTP vserver to the tivo as done in steps 3 - 7 above.
  4. Login to the tivo as in step 8 above (if you're still logged in, that's fine)
  5. Go to the startup directory: cd /etc/rc.d
  6. Enable writing to the root filesystem: mount -o remount,rw /
  7. Make a backup of the startup file: cp rc.sysinit.author rc.sysinit.author.orig
  8. Add a line to launch vserver in the background: echo "/var/hack/bin/vserver >& /dev/null &" >> rc.sysinit.author
  9. Set the root filesystem back to read-only: mount -o remount,ro /
  10. Logout: exit
That's all the work that needs to be done on the TiVo side.


Installing TyShow on Windows
  1. Create a new folder for TyShow (e.g. C:\Program Files\TyShow).
  2. Extract the RAR file to the new TyShow folder (use WinRAR - more info).
  3. Double-click register.bat to install the codec.
  4. Double-click TyExtension.reg to associate the files.

Now you're all set to go
Point your browser at the TiVo: http://192.168.1.3/nowshowing and click the "view" link to the right of a show. If all went smoothly, you'll see Windows Media Player play the show. While it's playing, it doesn't affect the TiVo. That is, you can watch one thing on the TV directly from the TiVo and something else (or the same thing) on a PC at the same time.

Enjoy.
With thanks to all the folks that have worked hard to write these programs and give them away for free!

Friday, February 11, 2005

Debugging web pages with Firefox

You've started designing some web pages. The concept is pretty simple. Just type up some HTML to get the layout you want and type your content in the right boxes.

Once you expand beyond a single page, CSS becomes necessary to maintain a consistent style. But it's a bit of a pain when you're trying to get things exactly right. Keeping the CSS file open in one window and a browser next to it, saving in one, reloading in the other... there's a better way.

Using the Web Developer extension, you can open the Edit CSS sidebar (screenshot). It shows a tab for each stylesheet used by the current page, and you can edit the contents directly; remember to save your changes when you're done. The page is updated in real time so there's no need to hit reload. Trying out different settings to see what works best is very easy. While you're checking it out, look at the other things the extension can do.

Moving on, JavaScript is the next area where we need some debugging help. When there's a syntax error in JS code, it will halt and none of the code will run. It's usually obvious when this happens, as none of the JS on your page will work. If you've got the Web Dev extension installed, you'll see a red X in the corner (screenshot) indicating there is a JavaScript error. Clicking that button will open the JavaScript Console (as would selecting "JavaScript Console" from the Tools menu). The JavaScript Console is a log of errors and warnings from any JS code, not just the current page, so don't be surprised when you open it and see tons of errors... they've been accumulating over time.

It's probably easiest to clear the logs in the JS console and reload the page so you can see all and only the messages pertaining to the page you're working on. For most errors, it will show which file and line number caused the error. To make things even simpler, clicking on it will open that JS file in Firefox's source viewer with the line in question already highlighted.

Moving on to the harder-to-find bugs... What about all those bugs that aren't syntax errors? You've got some interactive content, but when you click your button, it's not doing what you want. It's pretty easy to outgrow the JS console and need something a bit more powerful. That's where Venkman comes in: it's Firefox's JavaScript debugger. It's not included with Firefox because most people don't even want to know what JavaScript is, let alone debug it.

To get started, browse to the page you want to debug. From the Tools menu, click on JavaScript Debugger; it may take a few seconds to load. The Loaded Scripts pane lists all the JS files that are in use. This includes all tabs in all windows and even some extensions, so the list may be long. Filenames with a "J" in front of them are regular JS files; files with a "?" in front are typically web pages that have inline JS. Expand the section for the JS file you're debugging to see a list of functions in that file. Double-click a function name to open it in the Source Code pane. The dashes to the left of the code mark lines that can have breakpoints set by clicking on the dash. With a breakpoint set, the program will pause when that line of code is reached.

Suppose you've got a table cell with an onclick function that's not working the way you want it to. Set a breakpoint at the top of the function that's called for the onclick event. Go back to the web page and click on the cell. The debugger will come forward with your code active, but paused, and the current line (your breakpoint) highlighted. As you use the Step Over button to walk through the code one line at a time, keep an eye on the Local Variables pane; it shows all the variables that the current function is using. If you want to tweak the code as it's running, there's a textbox below the black output pane where you can type commands (e.g. fields[i].className = "thumbnail").

Those who are already familiar with source code debuggers will want to check out some of the more advanced features. The Local Variables pane has a tab for Watches, right-clicking on a breakpoint and picking Properties gives options for conditional breakpoints and triggers, and there's also some Profiling options in the Profile menu.

Complexity begets complexity. When you're writing JS and XSLTs that take your XML files and modify them so much at runtime that they don't resemble the page source, how do you see what it looks like? Ok, maybe that was jumping ahead too far.

Say you've got a basic HTML file and you wrote a simple JS function that is called when you click a button on your page. Each click of the button adds a new row to a table on the page. But then you notice that the new rows don't have the same background color as the original rows. Selecting View -> Page Source doesn't show the extra rows in your table because they weren't there when the page was loaded. You need a way to view dynamic changes to your content.

The DOM Inspector (sometimes called DOMi) is a tool for viewing the run-time attributes and structure of a document. It comes with Firefox as an option at install time. To use it, browse to your page and select "DOM Inspector" from the Tools menu. The window may look empty, but there's a lot there just waiting to be opened up.

You can expand the HTML tag and then the BODY tag and browse through your whole document tree that way, but if your page is large and/or complex, it could be hard to find the element you're looking for. The DOMi gives us an easy way to find what we want...

Click on the Inspect button in the top right. Now you can see your page in rendered form. If that's not the page you want, put the correct URL in the box at the top and click Inspect again. Next, click on the "Find node by clicking" button and click on the part of your page that you're interested in. The document tree will expand and highlight the element you clicked on while the top right quadrant will show details about that element.

Looking at the list of attributes for the node you clicked on isn't all you can do. By right-clicking on them, you can add, remove or change their values. And those are just the HTML attributes on the element. What about all the properties accessible through JS? They're all there too. In the titlebar of that pane where it says "Object - DOM Node", click the dropdown and select "Javascript Object". At first, it just says "target", but expanding that, you can see all the JS properties, events and functions that are defined on the selected element.

What about CSS debugging? That's in there too! Going back to the dropdown menu for the Object pane, select "CSS Style Rules". The pane splits in half and in the top half, you'll see all the CSS rules that are being applied to the selected object. Rules whose "File" starts with "resource://" are part of Firefox itself (built-in, from the theme or a user stylesheet). In general, the interesting ones you should recognize by their URL. Selecting one of those rules will show all of the properties it assigns below. In there, you can add, edit or delete properties. Keep in mind that changes you make there can't be saved.

Without tools like these, debugging dynamic changes to your page (often called DHTML) involves a lot of guess work and headaches. I've come to depend on them so much that without these tools, AmigoPix wouldn't have all the features it has now (not to mention some of the fun features I've got planned).

Thursday, February 10, 2005

Re-evaluating our calendar system

Yes, the basic calendar: January, February, etc...

We've been using the Gregorian calendar since October 15, 1582. Most people just accept that it's how the calendar is and never give a second thought to how it could be improved. Well, I'm not most people (ok, quit laughing).

There are some features of the calendar that we're forced to accept so let's start with those:
  • The length of a year is determined by Earth's orbit around the Sun.
  • The length of a day is determined by Earth's rotation on its axis.
  • Dividing one by the other gives about 365.2425 days per year.
  • Because there aren't a whole number of days per year, we need some sort of leap year system.
Ok, so what features are left that we can change? Well, there's days per week, months per year, hours per day, minutes, seconds... enough variables to have some fun.

Let's look at a simple example of what's wrong with our calendar. Ask your friends what day of the week they were born on. Do you expect any of them will know the answer? Do you know?

Here's another problem: anything you pay for by the month (cell phone, cable TV, car, house, etc) costs you 10% more per day in February than January.

I'm proposing a new calendar system to replace the 400-year-old Gregorian calendar. We can't go totally metric (we're bound by the 365.2425 days/year), but we can make all the months the same length. Not to be too disruptive, let's keep a week at 7 days, but make every month 28 days. That requires 13 months to make a full year, so we'll have to name the new one something, maybe "Triscadecember". While we're at it, fix the names of September, October, November and December (they should rightfully be months 7, 8, 9 and 10). Doing that, we get a few interesting properties:
  • Every month is exactly 4 weeks, calendars become reusable.
  • The first of every month is always a Sunday, easy to remember.
  • Every American holiday (except Easter) will always be on the same date and the same day of the week from year to year.
  • There will never be a blue moon :(
  • The 13th will always be a Friday (I'm not superstitious, are you?)
Ok, but there's still a few things to work out. Since 13 months of 28 days is only 364, we need one extra day. Let's call that day New Year's Eve. It's a very special day. Since New Year's isn't affiliated with any religion or culture (ok, some cultures do still have their own calendars, but the point is New Year's isn't exclusionary) we're able to give it special status without offending any of the overly sensitive freaks. The special dispensation we make for New Year's Eve is that it's not part of any month, nor is it even a day of the week... it's simply called New Year's Eve and falls between Saturday, Triscadecember 28th and Sunday, January 1st.
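The nice thing about a 13 x 28 layout is that date arithmetic becomes trivial. A quick sketch (the variable names are mine; it assumes day 1 of the year is Sunday, January 1st, with New Year's Eve and Leap Day handled outside the 364):

```shell
d=47                              # example: the 47th day of the year
month=$(( (d - 1) / 28 + 1 ))     # which 28-day month we're in
day=$(( (d - 1) % 28 + 1 ))       # day of that month
weekday=$(( (d - 1) % 7 ))        # 0 = Sunday ... 6 = Saturday
echo "month $month, day $day, weekday $weekday"
```

Day 47 comes out as the 19th of month 2, weekday 4 (a Thursday) — in any year.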

Leap years would still occur in the same years that they do now (having to deal with that extra 0.2425 days per year), so we'd have an extra day to stuff into the calendar. The only way to preserve the nice balance we've achieved is to give Leap Day the same special dispensation as New Year's Eve. So the year 2008 would end with Saturday, Triscadecember 28th, then Leap Day, New Year's Eve and then Sunday, January 1st, 2009.

Such a system does have its drawbacks. People born on a Wednesday would never have their birthday fall on a weekend. But most perceived problems are just differences. Date books, software, etc would need to be updated, but accounting for New Year's Eve not being a day of the week is trivial compared to dealing with the mess we're in now.

But in the end, it's the transition that's hard, not using the system. Much as we've learned from the metric system: once everyone's using it, things are much easier. Convincing America to change even though the rest of the world already has still seems to be impossible :(

Software updates

New software available as of Feb 10:
That link for Nvu isn't to the project's page (because it hasn't been updated yet). It's the project lead's blog (Daniel Glazman).

Monday, February 07, 2005

The ultimate energy source

We're continually searching for better ways of collecting energy from renewable sources using environmentally friendly means. Over the years, there've been some rather interesting propositions (such as solar panels in space, where there's never a cloudy day, and using microwaves to beam the energy back to Earth). For now, we've got quite a few methods that are being employed and are slowly catching on. Wind farms, solar collectors, dams, etc are helping, but it takes a long time to make the change.

Fuel cells are interesting, but that's just another small step along the way. They burn clean, but how was the hydrogen collected? How did they purify it and deliver it to the hydrogen station? It's not some magical unlimited clean energy source... most of the methods used to produce the hydrogen (generally extracted from water) cause pollution themselves. So it's not really reducing overall pollution, it's a shift in where the pollution is happening (instead of your car's exhaust, it's happening at the hydrogen plant).

Once you've got a clean source of electricity, you've got a clean way to make hydrogen. But it all comes down to finding that source of energy.

This is where theoretical physics and science fiction start to blend. Just like the fuel cell example, antimatter (yes, it really exists; it's produced at places like Fermilab) can be used as a fuel (stored energy). The problem with antimatter is that it's very expensive to create. Any time you convert energy from one form to another, you lose some of it (usually as heat and/or light). But antimatter is such a compact form of stored energy that you need very little of it.

How much energy is stored in antimatter? Remember Einstein's classic formula, E=mc²? This is what it's for. If you had as much antimatter as there is water in an ice cube, it would hold more energy than 62 million gallons of gasoline! (ref: 1, 2) Ok, but you read the Fermilab article and you're saying: "Sure, but if it takes 10 million times more energy to make antimatter than you get out of it, what's the point?"
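To sanity-check that figure with E=mc² (back-of-the-envelope, with my own assumptions: a ~45 g cube's worth of antimatter annihilating with an equal mass of ordinary matter, and roughly 1.3 x 10^8 J in a gallon of gasoline):

```latex
E = mc^2 \approx (0.045 + 0.045)\,\mathrm{kg} \times (3 \times 10^8\,\mathrm{m/s})^2
  \approx 8.1 \times 10^{15}\,\mathrm{J}
\qquad
\frac{8.1 \times 10^{15}\,\mathrm{J}}{1.3 \times 10^8\,\mathrm{J/gallon}}
  \approx 6.2 \times 10^7 \text{ gallons}
```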

This is where things get interesting. The trick is: don't make the antimatter, collect what already exists. In 1997, astronomers discovered that there's a "fountain of antimatter" at the center of our galaxy. There's enough energy stored there that if we could harvest it, not only would all of Earth's energy needs be met, new avenues of scientific discovery would open up. Space exploration and even more exotic things that are currently considered "provably impossible" might become possible. For example, transporters (yes, like on Star Trek) are currently "impossible" because you'd need more energy than can be found on Earth (not to mention other details, like we don't know how to do it :).

Ok, so there is a catch: we don't have access to all that antimatter. Trying to send a space ship to get it is currently far beyond our means; at present, we can't even send probes outside our solar system. Think about the distant stars you can see in the night sky. We don't have any means of getting that far, yet by comparison, that wouldn't even be the first step towards reaching the center of our galaxy. If you had a wall mural of our galaxy and put your finger on our sun, your fingertip would cover not only the Sun, Earth and Mars, it would cover those "distant" stars in our sky. If we can't even dream about getting past your fingertip on that mural, how will we ever get an arm's length away to collect all that antimatter?!

So don't go to the antimatter, bring it here to us. We've known for some time (decades?) that wormholes do exist (no, not like Star Trek)... only on a sub-atomic level. For the purposes of collecting antimatter, sub-atomic wormholes might be big enough. But here's where the problem comes in: we can't create wormholes, hold them open, or even predict where they might be.

So when will we be getting our free, clean, unlimited "energy from the stars"? Well, I'm not holding my breath. In fact, I don't really expect to see it happen in my lifetime. But it's out there, taunting us. I hope that some day we're able to figure out how to collect it.

Saturday, February 05, 2005

Software updates

Updates from 2/4:

Updates from 2/3:

The future of Television

So you've got a TiVo and you like it. You can watch what you want, when you want. Sometimes you forget that other people can only watch whatever's on at the moment. You encourage your friends to get a TiVo. You can't live without yours, how can anyone else?

Those of us who are "ahead of the curve" can enjoy the benefits for now, but what would happen if everyone had a TiVo? Things would be mighty different!

Looking into my crystal ball, here's what I see: PVRs will become ubiquitous. TiVos will have an option to stream a show directly from the network's web site. Maybe even similar to BitTorrent to help reduce the load on centralized servers. The biggest problem for the networks is that they need to make money off of it. To combat the current P2P model of trading commercial-free shows, ads as we know them will disappear. They'll become inline ads (or maybe even plot topics), a la The Truman Show.

The public will love it, being the ultimate PVR. PPV channels won't convert right away, they'll be rather stubborn about their content, but eventually we'll reach the saturation point and everyone will get their TV over the Internet. When that happens, cable companies will be more focused on Internet service than cable TV (they're already heavily invested so they won't mind the change).

Trouble will arise when the networks want to add DRM keys that require the player to contact their server every time you watch a show. Hackers, of course, will have a way to bypass this system. The public will remain ignorant because their store-bought PVR "just works" and they don't know it's reporting all that usage data (ignorant bliss). Initially, it won't be bad; the networks will settle for the stats they get by counting downloads from their web site. But when they try to pass a law that either limits the number of playbacks or requires all playback hardware/software to report usage statistics, the public will start to pay attention.

With HDTV trying to become the new standard and the Broadcast Flag killing fair use, it's going to be a rough battle. We've got the potential for a really cool system where everyone can get what they want. The challenge will be big media's greed vs the desires and rights of the public.

I don't think that's all too far off... probably less than 10 years.

Friday, February 04, 2005

Software updates

A new version of NSIS (2.05) was released today.
A new version of Nvu (0.80) was released a couple days ago.

New blog: AmigoPix

I've started a separate blog for AmigoPix to kick off the v0.1 release. The techies can subscribe to it, and those who aren't interested don't have to be bored by the geek speak :)

The biggest needs I have right now are graphic design and usability feedback. If you can contribute, please let me know.

There's a lot of work yet to be done, but it's coming along nicely. Features of note that will be coming in the future:
  • Web-based admin system to allow editing of titles/descriptions and image rotation.
  • Better sorting options.
  • Statistics to track which pics are the most popular.
  • Virtual folder for things like "New in the last 7 days", "Most popular".
  • Searching pictures by keyword or description.
  • A unique "browse by auto-category" that creates virtual folders based on keywords.

Thursday, February 03, 2005

Gmail invites

I have some gmail invites to give away. First clicked, first served:
  1. Invite One
  2. Invite Two
  3. Invite Three
  4. Invite Four
  5. Invite Five
  6. Invite Six
  7. Invite Seven
  8. Invite Eight
  9. Invite Nine
  10. Invite Ten
  11. Invite Eleven
  12. Invite Twelve
I'm surprised it's still in Beta and hasn't been opened up to the world yet.

If they're all gone by the time you read this, check isnoop.net's invite spooler for more.

Wednesday, February 02, 2005

WiFi everywhere

I recently spent Christmas vacation with my parents. They have a computer and cable Internet access, but no WiFi devices (yet). I turned on my laptop and a quick scan showed 19 (yes, nineteen) Access Points in range! After trying 2 or 3, I was able to browse the web. Now, I must caution that there are added risks when doing this. While most likely, it's just someone who doesn't realize that WiFi goes through walls and can easily be shared with neighbors, there's a small chance that they could be watching what you're doing (you are using their network after all).

On the flip side, I run an open AP. It's named "public" and I serve up free Internet access (well, for anyone within 100m or so). But that doesn't mean I want people to abuse it...

Last night, I noticed that my Internet bandwidth was under heavy use. Could I be infected with a virus that's trying to spread itself? Could some other infected machine be hitting mine? I checked the graph, and it was symmetrical; that is, both upload and download were in heavy use. Either of my initial fears would have been one-sided, so it was probably something else. I ran some traffic analysis and saw that it was BitTorrent traffic. I use BT, but I wasn't at the time. Sure enough, it was coming from my WiFi AP. I double-checked my wireless clients to make sure I hadn't accidentally started BT... nope.

Connecting to the AP, I saw that there was another user online whose name was similar to a nearby AP's, so I'm sure my neighbor had just booted up and didn't pay attention to which AP he connected to. Rather than do anything devious, I simply blocked his MAC address so next time he should stay on his own AP. Both his client and AP names start with the same two capitalized letters, as if they were initials. I clicked on over to Infospace and looked up a list of my neighbors (low turnover rate on my street). Sure enough, the initials match the resident right next door to me (which explains the strong signal strength).

So now the question is: Do I knock on my neighbor's door and say, "Sorry about dropping your BitTorrent connection from your Toshiba laptop last night, but you were using quite a lot of my bandwidth."? Or is that a bit too Big Brothery?

Ok, I can hear the question already: "If you don't want your neighbor on your AP, why call it 'public' with no encryption?" Well, casual use is fine, but I don't want someone hammering it, sucking up all the upstream bandwidth and transferring potentially illegal files with my IP as the source. I'll be installing firewall rules to block p2p stuff, but allow general web use.

Tuesday, February 01, 2005

AmigoPix: A different kind of photo gallery

Like many people, I take lots of pictures with my digital camera and want to share them with friends. The obvious solution is to post them on a web site and send the link out to everyone.

Things get interesting when you consider that I have >10,000 pictures (>10G). Sure, Gallery is nice (and has lots of cool features), but it imports all your pictures into its own database. I really don't like that.

The next point to consider is that I like to browse my pictures more quickly than the web allows (and full screen). Normally, I use ACDSee for this and just roll the mouse wheel for next/prev. If I were to publish all my photos on a web site, I'd have two sets of photos to keep synchronized... what a pain.

My solution: serve the photo gallery from my home web server (I can also map a drive letter for easy viewing). Having evaluated several packages, I found that I like Singapore, which has a very nice look when browsing. But I wanted to add features, and I don't know PHP. So I started my own project.

In October of 2004, I began coding a simple gallery app in Perl. By the end of the first weekend, I had it up and running on my web site. Since then, I've been working hard to add many new features and it's coming along nicely.

I'll post more as things progress.