Tuesday, September 30, 2008

Wireless support improving in Linux

All too often, people automatically assume they need to use ndiswrapper to get wireless network cards working in Linux. In fact, I was about to do the same thing today, because I'd had no luck getting this card to work in other distributions. Once I stuck the network card in, Ubuntu informed me that there were drivers available for it, and that it would have to download the firmware for me. I had to temporarily plug the computer into the wired network to download the firmware. Once that was done, I unplugged from the wired network and the wireless connected right away (it already had the WEP key). I'm more impressed with Linux every day, and the situation with device support is only going to improve as more and more manufacturers realize Linux is a force in the world.

The card is a Linksys WPC54G, which according to lspci uses the Broadcom Corporation BCM4318 chipset. I'm using Ubuntu 8.04.
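If you want to check what chipset your own card uses, or your distribution doesn't offer to fetch the firmware automatically, something like the following should work. This is a minimal sketch assuming Ubuntu 8.04, where the Broadcom firmware extractor ships as the b43-fwcutter package; the package name may differ on other releases.

# Identify the wireless chipset (look for the Broadcom entry)
lspci | grep -i network

# Install the firmware extractor for Broadcom b43 chipsets; the
# package offers to download and extract the firmware for you
sudo apt-get install b43-fwcutter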

After that, I also tried another wireless card, the Linksys WUSB54G. I plugged it in, and it was detected and able to connect to the network with no issues at all, although I did notice it seemed a little slow. It's a great improvement over what I've previously seen.

Monday, September 1, 2008

Puppy Linux 4.0 Review

After my last review of Puppy Linux, I was very skeptical about the new release. But, I'll say right away that it has improved quite a bit.

From the "mount" icon on the desktop, I was very easily able to mount my USB drive. Even though this is automatic in other distributions, it's made very easy in Puppy, and I suppose the distro is lighter because it doesn't have an auto mount service running. One thing that Puppy needs to do is put an icon on the desktop once it's mounted. It took me a few minutes to figure out that you have to click on mount again, and click on the icon for your appropriate disk. It's a little redundant, if you ask me. I later found an option in the menu which will change this behaviour. If you go to "menu", "desktop", "hotpup desktop drive icons" it gives you the option to put an icon on the desktop when it mounts the drive. I still feel that icons should be put on the desktop by default.

Once I had my USB drive mounted, I was easily able to retrieve my WEP key from the text file which opened in the default text editor Geany. Then all I had to do was click on "network" from the start menu and "pwireless wireless scanner" and paste the WEP key in. My wireless card was detected without any issues and I was able to connect to the internet.

The package management seems to work pretty well in Puppy. It basically uses wget to retrieve packages from your mirror of choice. I would prefer if it automatically chose the fastest mirror, and if the package management was a little more transparent. Instead of showing all the command line stuff that it's doing, it should just show a progress bar in the package management window. Since the focus of Puppy is being a lightweight distro, most of the packages are old versions, but that's somewhat understandable.
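Just to illustrate what's happening under the hood, downloading a Puppy package by hand would look something like this. The mirror URL and package name here are made up for illustration.

# Fetch a .pet package from a mirror by hand; -c resumes a
# partial download if the connection drops
wget -c http://mirror.example.com/puppy-4.0/pet_packages/somepackage-1.0.pet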

All the applications that are installed by default are focused on being lightweight. This really shows with the "office" type applications: AbiWord opens almost instantaneously, as do Gnumeric and Inkscape. The lightweight aspect of Puppy Linux is its strong suit. Everything runs very quickly, even on an older laptop with 512MB of memory. I imagine it would do well with even less.

When you shut down Puppy for the first time, it asks if you want to save your settings to a file. It also asks if you want to copy some files from the CD to the hard drive to improve bootup and overall performance. It was able to do this for me without any problem on a partition that contains Ubuntu.

One major issue for me with Puppy Linux is that the root user is the default account. This means that most people using it will be surfing the internet as root, and doing everything else as root. This may be common practice in the Windows world, but I think most Linux users would agree that this should be changed. It could be set up to use sudo instead, like Ubuntu and other distros. It's just not good from a security perspective to use root for everything.

Another small annoyance I found was that I couldn't change the time by right clicking on the clock at the bottom right. I instead had to go to the main menu and find the set time option. It's a little thing, but something you just expect to work without hassle.

I also found out after rebooting that my wireless network didn't reconnect automatically. I feel that once I've put in my WEP key, it should be saved and the system should reconnect to that network automatically at bootup. Another one of those things that you just expect will work.

Overall, this release was a huge improvement. I hope the developer(s) will find this review helpful in focusing their development efforts for the next release.

Saturday, July 12, 2008

Using rsync to keep desktop and server music files in sync

It's not perfect, but I've come up with a pretty good way to keep my music folder clean and in sync with my server. This way, I can put files in the music folder on either the server or my desktop, and I don't have to worry about which files are where. It makes sure it's the same in both places.

My script also deletes the typical junk files you end up with when downloading music: text files, .url shortcuts, JPEGs, etc.

I have a samba share permanently mounted on my desktop, so I can work on the server files as if they're local.

So, here's the script.

#!/bin/bash

# Delete all the crap files on the desktop before syncing
find /home/username/Music/ -regextype posix-awk -regex ".*\.(jpg|ini|rtf|url|txt|log|sfv|nfo|md5|m3u)" -exec rm -v {} \;

# Delete all the crap files on the server before syncing
find /home/username/servername/music -regextype posix-awk -regex ".*\.(jpg|ini|rtf|url|txt|log|sfv|nfo|md5|m3u)" -exec rm -v {} \;

# Sync music from desktop to server (-r recursive, -t preserve times, -v verbose)
rsync -rtv --progress /home/username/Music/ /home/username/servername/music/

# Sync music from server to desktop
rsync -rtv --progress /home/username/servername/music/ /home/username/Music/


That's all there is to it. This will only synchronize new files that you put on there. It won't delete any files, unless they're of the file types specified above. You can change it to suit your needs.

You could even set this up in a cron job and have it syncing daily.
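For example, assuming you saved the script as /home/username/bin/syncmusic.sh (a made-up path) and made it executable, a crontab entry like this would run it every night at 3am:

# Open your crontab for editing
crontab -e

# Then add a line like this to run the sync nightly at 3:00am
0 3 * * * /home/username/bin/syncmusic.sh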

Friday, May 9, 2008

Permanently mount a Samba (or Windows) share in Linux

Samba shares are easy enough to browse to with the GUI, but it's a lot more convenient to have them mounted in a local folder. Also, I've found that many programs aren't able to see Samba shares, even though the OS can.

First, let's test and see if it will do a temporary mount with this command:

mount -t smbfs //servername_or_IP/file_store /home/user/Desktop/file_storage

Use sudo with the command above if you're using Ubuntu or another distro that doesn't have the root account active by default.
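One thing to note: the local folder has to exist before you can mount anything onto it, so create it first if you haven't already.

# Create the local mount point first
mkdir -p /home/user/Desktop/file_storage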

The mount command will prompt for a username and password. Enter them if necessary, or leave them blank if you have anonymous access. Once you enter credentials, you should be able to browse to your local folder and see all your files.

To mount it permanently, you will have to add a line to the file /etc/fstab.
This file tells Linux which drives you want mounted at bootup. Use gedit, nano, or a similar editor of your preference to open and edit fstab. You need to add a line similar to these, depending on your exact setup:

For anonymous access, add a line like this. Your Samba share has to be set up to allow anyone to access it (I'll provide that config at the end). This allows you to mount the share without providing credentials.

//servername_or_IP/file_store /home/user/Desktop/file_storage smbfs guest

To mount a samba share with credentials, you just need to provide the username and password like so. Note that the options field in fstab can't contain any spaces.

//servername_or_IP/file_store /home/user/Desktop/file_storage smbfs username=username,password=password

If you're a security nut, or just plain paranoid, you may want to provide your credentials in a separate file, since /etc/fstab is readable by everyone on the system.

//servername_or_IP/file_store /home/user/Desktop/file_storage smbfs credentials=/root/.smbcredentials
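The credentials file itself is just two lines, and since it holds a plain text password, make sure only root can read it. A minimal example:

# Contents of /root/.smbcredentials
username=username
password=password

# Lock the file down so only root can read it
chmod 600 /root/.smbcredentials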

You can also place dmask=777,fmask=777 at the end of the options in the fstab line to alter the permissions the folders and files are mounted with.
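Once you've saved fstab, you don't have to reboot to test it. You can tell Linux to mount everything listed in fstab that isn't already mounted:

# Mount everything listed in /etc/fstab
sudo mount -a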

Here's how a typical Samba share configuration should look for anonymous access. This is typically stored in the file /etc/samba/smb.conf:

[global]
workgroup = workgroup_name
local master = yes
preferred master = yes
netbios name = storage
server string = storage
security = SHARE
max log size = 1000
dns proxy = No
wins support = Yes
wins server = localhost

[File_Storage]
comment = file-storage
path = /mnt/hda/share/file_storage/
read only = No
writeable = Yes
create mask = 0777
directory mask = 0777
guest ok = Yes

In the global section, "security = SHARE" is the main key to anonymous browsing.
The other key line is in the individual share configuration of File_Storage:
"guest ok = Yes"
Without those two lines, you'll be banging your head against the wall for hours trying to make your share work for anyone without requiring credentials.
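Whenever you change smb.conf, it's also worth running Samba's built-in syntax checker, and the server needs a restart to pick up the changes. The init script path below is what Ubuntu used around this time; it may be different on your distro.

# Check smb.conf for syntax errors and show the parsed config
testparm

# Restart Samba so it picks up the new configuration
sudo /etc/init.d/samba restart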

Note: This really isn't the best way to do things as far as security goes. But in my case, convenience wins over security. I have a firewall. My wife wants quick easy access to a large storage drive, as do I. I don't feel the data we're storing on there is of high value to anyone else. But, you'll have to evaluate these things for yourself.

Wednesday, May 7, 2008

The high price of running "legacy" hardware.

A lot of people seem to stick with their old hardware because they can't afford new hardware, or they think it's too expensive to buy / build a new computer.

Let's say I have a system with a Pentium 3 processor, which requires PC133 memory. Most likely, the system will have very little memory, especially by today's standards. If you go to a site like Newegg, $50 may at first look like a bargain for 512MB of memory. That is, until you look at prices of newer types of RAM and realize you can get 2GB of DDR2 for the same price. It's almost impossible to find 1GB modules for these older systems, as that capacity was unheard of at the time, and even if you could find them, most motherboards of the era didn't support modules that large. Then you start to look around at prices of everything else and realize that computer components are dirt cheap these days. You can get a dual core AMD processor that will completely smoke your ol' P3 for a mere $56. Of course you'll have to get a motherboard and memory to go with the processor, but you can get a Foxconn or similar motherboard with everything (sound, video, NIC) integrated for around $50.

Most likely, you're waiting around a lot with your current system, especially if you're trying to run a more modern OS than Windows 98. Trying to upgrade your old system just isn't worth the price and hassle for the amount of processing power you'll end up with. New components are typically much more power efficient than old ones as well, and will help reduce your carbon footprint and your electricity bill.

Next on the agenda is CRT monitors. These beasts were big, bulky, and consumed a large amount of power. If you're still using one, you could probably knock a noticeable amount off your monthly electric bill just by upgrading to a more power efficient LCD. LCDs are just as good or better in quality than CRTs, and they're easier on your eyes. A 60Hz refresh rate on a CRT would cause a lot of eye strain, but that's a thing of the past with LCDs, unless you have the brightness set way too high.

Another aspect that many may not think of is virtualization. I have a system that's powerful enough to run my OS, plus any other virtualized OSes I may want or need to run at the time. This also saves me money, since I don't have to keep extra test servers running, sucking up electricity and heating up the room. I've tested all sorts of OSes this way, including Fedora, Ubuntu, Windows Vista, PCBSD, and any other beta release I feel like testing at the time. It saves me a lot of time, trouble, and hardware to just load them up as VMs instead of using physical servers. Virtualization on an old P3 just wouldn't be feasible.

So really, the best thing to do with those old systems is to donate them to a charity or school, and fork out the cash for some shiny new hardware. You'll appreciate that you did once you start using it and see all the advantages.

Sunday, April 20, 2008

Hardy Heron fixes hard coded color scheme

Just a quick note. I've been testing Hardy Heron since it went beta, and everything seems to be working really well. I just noticed today that the hard coded color scheme I referenced in my previous article (http://happylinuxthoughts.blogspot.com/2007/12/out-with-brown-on-ubuntu.html) has been fixed. When you change the color scheme of the desktop and the background color of the login screen, everything works just like it should, instead of showing a momentary brown background that doesn't match your color scheme.

Thursday, March 6, 2008

OpenSuse 11.0 Alpha 2, Initial Reaction

I've been a fan of the Suse distro for quite some time now, even though it's not my primary distro of choice. I thought I would download the alpha and see how things are coming along.

The installation process was almost identical to 10.3. In fact, most of the screens still showed that it was 10.3. The only difference was the look of the installer, and I have to say, it does look quite nice. I was impressed that I was even able to choose my time zone by clicking on the map and zooming in to my location.

I picked the GNOME desktop to install, because that's what I've grown accustomed to and it seems to be the standard these days. It was pretty obvious that this is an alpha when I wasn't able to open the system menu by left clicking; I instead had to right click and choose open. Other than that, everything seemed to be shaping up nicely. Since YaST is the main config tool for Suse, you would think it would be on the main menu, but you actually have to go looking for it under more applications. I think that's an oversight by the developers. I'm sure a lot more changes are coming on the GNOME side of things, because it looked kind of half 10.3, half 11.0.

The installer doesn't give you the option to install both GNOME and KDE, so I had to go into YaST and choose all the KDE packages. I was curious to see how KDE 4 is coming along, so I installed it. Overall, KDE 4 looks pretty good. I don't like how the menus function, though: to expand a menu you have to click, then it kind of slides over, and to get back you have to click back. I would say the whole menu system looks pretty nice but is a huge step back in functionality.

If you're going to install the distro at this point, I highly recommend using a virtual machine. You're probably only going to keep it around long enough to check out the new features and test the software, and I wouldn't say it's completely usable at this time. That's not what the developers intended it for, either; that's why it's still alpha.

Enjoy

Wednesday, February 13, 2008

Just one more reason I can't stand Windows Vista

The task: remotely set up a customer's OpenVPN client on Windows Vista to connect to an OpenVPN server. (On XP, this is a simple task.)

Problem: the VPN client won't connect.

Resolution: slightly improved; the VPN now connects but doesn't route traffic properly. Must contact the vendor for further Vista insight.

Due to the "Security" features of vista, it doesn't allow a vpn client to function, unless it's run by Administrator. Funny thing is, the account we were logged in as was supposedly an administrator.

The first thing I had to do was enable showing hidden files and known file extensions. The process was changed in Vista for no apparent reason, just like most everything else in Vista.

The next thing I had to do was run msconfig. Then I had to select the VPN client, go to properties, and check the "run as administrator" box. After that, a reboot, and the client was able to connect to the VPN and ping the VPN IP address, but traffic wasn't routed correctly. Unable to resolve the routing issue, we now have to contact the vendor for further support and suggestions.

Sounds simple enough, but it took a lot of Google searching and experimentation. I see all of this as completely unnecessary crap, since it would be a 5 minute task in Linux, even though Linux is a much more secure OS.

Personally, I think that Vista will be passed over in the corporate sector because of all the support hassles it will cause.

Tuesday, February 12, 2008

Secure system to system file copies without Samba or NFS

What if you need to copy files from one system to another, but don't want to bother with setting up file shares (Samba or NFS)? It's actually very quick and simple.

All you need is the scp command. The tricky part is getting the syntax correct, but I'll help you with that. For this example, let's say you're copying a file from one user's desktop to another user's desktop. Here's how you would do it:

scp filename username@remotesystem:/home/username/Desktop/

Once you run this command, it will ask you for the password on the remote system. Of course, you need the correct credentials or it will fail.
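One extra tip: if the remote system's SSH daemon listens on a non-standard port, scp takes a capital -P to specify it (lowercase -p does something else entirely; it preserves file times). Assuming port 2222 as an example:

# Copy to a remote system whose sshd listens on port 2222
scp -P 2222 filename username@remotesystem:/home/username/Desktop/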

Now, if you want to copy from the other system to your system, you can do that as well; you just reverse the source and destination.
----
scp username@remotesystem:/home/username/Desktop/filename .
----
Keep this command all together; long commands tend to break up in narrow formats like this.
With scp, the direction of the copy is determined by the order of the arguments: the source comes first and the destination last. Also, the period at the end of the command indicates that you're copying into the current directory. Without a destination, the command will not work, so this is important. You could put any location there, as long as you have the proper permissions. Enjoy.
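And if you need to grab a whole directory rather than a single file, that's what the -r (recursive) switch is for. A quick sketch:

# Recursively copy an entire remote directory into the current directory
scp -r username@remotesystem:/home/username/Music .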