Saturday, December 31, 2005

AMD64 3400+ is faster than Pentium EE?

I just realized that my AMD64 3400+ laptop is actually faster than Intel's Pentium Extreme Edition 3.72 GHz CPU, after reading the following link:

http://www.tomshardware.com/2005/11/21/the_mother_of_all_cpu_charts_2005/page24.html

Wednesday, December 28, 2005

Connecting Holux GM-210 GPS Receiver to Laptop

I bought my portable GPS receiver a few years ago. It was designed to connect to my Sony CLIE NX80V PDA. Somehow, I have almost never used it since then, perhaps because it was too cumbersome to set up in my car. I thought I could connect it to my laptop, but it couldn't be done, because the connector is different.

Now, somebody has posted a way to connect it.

http://astro1.panet.utoledo.edu/~igor/GPS2Clie.html

Some Optimization Flags on GCC

Yesterday, I was playing around with some optimization options supplied by GCC (ver 4.0.2) on my AMD64 laptop. I created a very small piece of code to compute sin(x) * cos(x). As we know, the Pentium and newer processors have an FP instruction that computes these two functions simultaneously with a single mnemonic: FSINCOS.

The code is as follows:

#include <math.h>

inline double fsincos(double x)
{
    return sin(x) * cos(x);
}
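
Just to have something to run, here is a tiny test driver around that snippet (my own sketch, not part of the original file); link it against the math library:

/* quick sanity-check driver for the fsincos() snippet above
   (a sketch only; build e.g. with: gcc -O3 -ffast-math fsincos.c -lm) */
#include <math.h>
#include <stdio.h>

static inline double fsincos(double x)
{
    return sin(x) * cos(x);
}

int main(void)
{
    double x;
    for (x = 0.0; x <= 3.2; x += 0.4)
        printf("sin(%.1f)*cos(%.1f) = % .6f\n", x, x, fsincos(x));
    return 0;
}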

First, I compiled it to assembly source with no optimization enabled, just using the defaults, as follows:

> gcc -c -S fsincos.c -o fsincos.s

and then opened the generated assembly code in fsincos.s. There I saw that it called the library functions for the sin and cos trigonometry functions, as I expected. I then recompiled it with full optimization flags enabled:

> gcc -c -S -mtune=athlon64 -mfpmath=sse -msse3 -ffast-math -m64 -O3 fsincos.c -o fsincos64.s

Surprisingly, it still called GNU's "sin" and "cos" math functions! What? I said (not out loud, though). I checked the gcc man page again; nothing special about these things. Hm... let's try adding "387" to the mfpmath option, I said to myself.

So the command is now as follows:

> gcc -c -S -mtune=athlon64 -mfpmath=387,sse -msse3 -ffast-math -m64 -O3 fsincos.c -o fsincos64.s

Voila! Now the generated assembly uses the FP instruction "fsincos". Since then, I have set environment variables as follows:

export CFLAGS="-mtune=athlon64 -mfpmath=387,sse -msse3 -ffast-math -m64"
export CPPFLAGS=$CFLAGS

X86_64 Assembly Nightmare

A few days ago I was trying to compile the mpg123 application. It is a command-line MP3 player. I began with the command:

make help

It gave some options. First I tried "make linux", which failed to compile with a bunch of errors. I then tried "make linux-3dnow" and "make linux-3dnow-alsa". All of them failed to compile at some assembly files (*.s). I looked at one of the assembly-code files and saw that the code was written for the 32-bit platform, while my laptop was running x86_64 SUSE 10.0 (64-bit Linux). I changed "pushl %ebp" etc. to "push %rbp" etc., and the compile went OK, but when I tried to run the program, it crashed with a segmentation fault.

Hmm... I then went to the AMD website and downloaded all the documentation. These PDF documents are huge (each file has more than 300 pages). I have started reading one of them. Will post any progress later.

Stay tuned!

64-Bit Power Struggle Heats Up

AMD is still the winner, at least for now.

http://www.eweek.com/print_article2/0,1217,a=167278,00.asp

Sunday, December 18, 2005

Fixes on FFTW for 64bit

I just downloaded the FFTW library from http://www.fftw.org and tried to compile it with full optimizations and the 64-bit flag enabled. Most files compiled OK, but when GCC tried to compile sse2.c, it failed with complaints:

Error: suffix or operands invalid for `push'
Error: suffix or operands invalid for `pop'

When I dug into the code I saw that it was using the 32-bit register ebx. I changed it to rbx (the 64-bit version of the register), recompiled, and it worked.

I still need to test it rigorously to see whether all the optimizations have any inadvertent effects. By the way, here are my optimization flags for the GCC compiler:

CFLAGS="-mtune=athlon64 -msse2 -mfpmath=sse -ffast-math -m64 -O3"

I am in 64bit Now!

A few days ago, I installed SUSE 10.0 64-bit edition on my Compaq Presario R3000z laptop. After backing up all the data to my new Seagate external hard drive (it's 200 GB, so it has plenty of space), I resized the Linux (reiserfs) partition from 15 GB to 25 GB, plus I added an extra 5 GB FAT32 partition to share files between Linux and Windows XP (in case I use Windows :-).

The standard SUSE does not come with NVIDIA's binary-only enhanced driver, so the graphics were basic. I downloaded the 64-bit driver from nvidia.com, but had problems installing it. Somehow, downloading the 64-bit driver through YaST worked out. I don't know what the problem was, but anyway it works OK now, so I could change the resolution to 1680x1050 and the graphics were fine.

Another problem was the wireless device. 64-bit Linux does not like 32-bit drivers, so the old Broadcom driver for WinXP did not work. After googling, I found the driver for 64-bit Windows (somebody posted it at www.planetamd64.com; you must register on the site to be able to download it, but registration is free). Yet ndiswrapper still could not load the driver. In the dmesg output I saw one error saying "ntoskernel.exe: RtlZeroMemory" was an undefined function. Hm...

One problem was that the *.conf file under /etc/ndiswrapper/bcmwl5 had a different devid; checking with lspci -n showed a different devid on my card. I just created a symbolic link to the existing conf file, with the link's file name carrying the correct devid. Still, ndiswrapper failed to load the driver. Oh yeah, I forgot to mention that I always set the environment variable CFLAGS="-mtune=athlon64 -msse2 -mfpmath=sse -ffast-math -m64", as well as CPPFLAGS.

I googled again and found a posted message describing the same problem. The guy later said that he upgraded ndiswrapper to the latest version and it worked. I gave it a shot. You know what? It worked!

The next tasks for me are to recompile xine (the one that comes with SUSE Linux cannot read CSS-encrypted commercial DVD movies) and to recompile some other applications for 64-bit.

Thanks to Google, I have successfully made my laptop's wireless work! I am falling more in love with Linux now. I rarely boot my laptop into Windows anymore, only when I need to run "Age of Empires".

Friday, December 2, 2005

It is now Era of Parallel Computing!

Everything is now done in parallel! AMD and Intel are racing to get to market first with their dual-core, quad-core or even octa-core processors, although I believe AMD won the 64-bit and dual-core rounds in the first cycle. The Xbox 360, which was released a few weeks ago (and is still hard to find in stores!), also uses parallel processing. Sony's PS3 will go even farther by using 8 cores [well, they may not really be like 8 cores in a conventional processor; see http://www.ps3land.com/ps3specs.php].

The hardware has arrived! Now it is the software developers' turn to decide whether to use this parallel computing power. That's the biggest concern now, especially in the PC world. Many developers, a lot of them making business software, still think parallel processing is overkill for most of the software people use at the office or at home. For editing documents, browsing, or reading/sending email, we do not need parallel computation. That may be true. But for power home users who do video editing, transcode media files (e.g. from WAV to MP3, MP3 to OGG, or DVD to DivX/MPEG-4), or do 3D-related processing (games, vector-based applications, CAD, raytracing, etc.), parallel processing is a big deal.

Programming parallel computation is not an easy task; it is a very complex and mind-squeezing one. First, we have to decompose a serial computation/function into "grains" (the smallest parts of the computation that cannot be decomposed any further) and maximize the independence between grains. Then these grains have to be distributed to the processing units. We also need to consider communication and processing time for each processing unit, especially if we want to load-balance the task. To make a long story short, it is not an easy job at all for software developers.
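
As a toy illustration of that decomposition (just a sketch I put together, not code from any real project): split an array sum into per-thread "grains" and combine the partial results at the end.

/* toy example of decomposing a computation into independent "grains"
   and distributing them across threads; build with: gcc -O2 -pthread grains.c */
#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 4

static double data[N];

struct grain { int lo, hi; double partial; };

static void *sum_grain(void *arg)
{
    struct grain *g = arg;
    double s = 0.0;
    int i;
    for (i = g->lo; i < g->hi; i++)
        s += data[i];
    g->partial = s;              /* each grain writes only its own result */
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct grain g[NTHREADS];
    double total = 0.0;
    int i;

    for (i = 0; i < N; i++)
        data[i] = 1.0;

    for (i = 0; i < NTHREADS; i++) {       /* distribute the grains */
        g[i].lo = i * (N / NTHREADS);
        g[i].hi = (i + 1) * (N / NTHREADS);
        pthread_create(&tid[i], NULL, sum_grain, &g[i]);
    }
    for (i = 0; i < NTHREADS; i++) {       /* collect and combine */
        pthread_join(tid[i], NULL);
        total += g[i].partial;
    }
    printf("total = %.0f (expected %d)\n", total, N);
    return 0;
}

Even in this trivial case you can see the issues mentioned above: the grains must not share results, and the split has to be balanced so no thread sits idle.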

A few weeks ago I read in some magazine (I forget which, maybe eWeek or InformationWeek) that Microsoft is now turning its eyes to parallel computation. At one conference, Bill Gates told the forum about it. This is a totally new world for Microsoft, as the field is mostly dominated by UNIX-based operating systems (including Linux). Even the fastest supercomputer, IBM's Blue Gene, runs on a custom-tailored Linux OS.

If you are a software developer, learn parallel programming now to grab this new job opportunity! Google, Microsoft, Intel, AMD, Sun, IBM, nanotechnology companies, research labs, biotech/pharmaceutical companies, game studios and many other companies are looking for talent to start coding their parallel-processing computers and systems. Remember Folding@home or SETI? These are a few examples of the kind of parallel-programming tasks you will need to master.

64bit or 32bit?

Novell's SUSE has released the latest version of its Linux distribution, 10.0 (and 10.1 is under development). At opensuse.org, I saw there are also ISO files available for 64-bit processors. A few weeks ago I downloaded the 5 installation-CD ISO files from opensuse.org (and just recently converted them into a single DVD; just follow the instructions at http://www.opensuse.org/Making_a_DVD_from_CDs).

I booted up my AMD64 laptop with the DVD, but then I changed my mind: not to upgrade my Linux yet and to stay with the older version (SUSE 9.3). One of the reasons is that my Linux partition is too small for 64-bit (I have only 12 GB of my 80 GB hard drive for Linux; the rest is for WinXP). I searched the internet and found some discussions saying the 64-bit version requires almost twice as much space as the 32-bit version. This is because there are /lib64 and /usr/lib64 directories, in addition to /lib and /usr/lib, so we can run applications of both flavors. Also, 64-bit binaries are generally larger than 32-bit ones, because some of the processor's instructions require extra bytes to handle 64-bit operations.

Some time ago, I read on the Internet that ndiswrapper might not work in a 64-bit environment. But I saw there was a 64-bit version of it on the CD, so I believe it is now supported. This was another reason I was scared to upgrade in the beginning. Also, NVIDIA and ATI now provide 64-bit drivers for their graphics processors. The remaining issues are minor, I guess.

What are the benefits of using a 64-bit system? OK, first of all, it can handle a huge memory space (hundreds of terabytes, instead of just 4 GB as on a 32-bit system). Another thing is that, theoretically, it should be faster. Why, you might ask? Because the processor can transfer twice as much data in the same amount of time as a 32-bit system. Also, security code gets a big benefit, because long 64-bit integer computation is now handled directly in machine code; I recall that in a 32-bit environment, we need to handle large integer computation manually using multi-word algorithms.
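
As a trivial sketch of that last point: the multiplication below compiles to a single native 64-bit multiply on x86_64, while a 32-bit build has to synthesize it from several 32-bit operations.

/* trivial demo: 64-bit integer math is a native, single instruction on
   x86_64; a 32-bit build must emulate it with multiple 32-bit operations */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t a = 0x123456789ULL;              /* values wider than 32 bits */
    uint64_t b = 0x0ABCDEF01ULL;
    uint64_t c = a * b;                       /* one mul on a 64-bit build */
    printf("%llx * %llx = %llx\n",
           (unsigned long long)a, (unsigned long long)b,
           (unsigned long long)c);
    return 0;
}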

Anyway, this is just my hypothesis. I have not tested or benchmarked it yet. I will do it soon, but first I need to get my Seagate 200 GB external hard drive to back up all my data. Will post here as soon as I've done it.

Saturday, October 15, 2005

Big Databases

InformationWeek last month had a column mentioning which companies have the biggest databases on their servers. The following are the top ten commercial databases by size:

  1. Yahoo, 100.4 TB. Platform: Unix
  2. AT&T Labs, 93.9 TB. Platform: Unix
  3. KT IT-Group, 49.4 TB. Platform: Unix
  4. AT&T Labs, 26.7 TB. Platform: Unix
  5. LGR-Cingular, 25.2 TB. Platform: Unix
  6. Amazon.com, 24.8 TB. Platform: Linux
  7. Anonymous, 19.7 TB. Platform: Unix
  8. Unisys, 19.5 TB. Platform: Windows
  9. Amazon.com, 18.6 TB. Platform: Linux
  10. Nielsen Media, 17.7 TB. Platform: Unix

It is not surprising to see Yahoo at the top of the list, because they provide 2 GB per user. If there are 20 million users in the US alone, that is 2 GB * 20 million = 40,000,000 GB = 40,000 TB. So I still don't understand why it says only 100 TB. Or maybe it has fewer than 20 million users in the US? Likely.

With the commodity hard disks currently on the market, a 400 GB drive can be bought for around $300 (I just checked shopping.com; it actually costs less than that), so with 3 drives we could get 1.2 TB of storage for only $900. To get 120 TB, it would cost only $90,000.

For the full details, see www.wintercorp.com

Thursday, October 13, 2005

Free Wi-Fi Access

Google has started beta testing its free Wi-Fi service in a few San Francisco locations. I don't know what the access looks like to users, but the big concern is how secure it is, especially when people access sensitive data (e.g. e-commerce sites).

Anyway, this is good news, and I have to admit that Google is one of the cool companies to work for. Kudos to Google for its initiative!

Friday, September 23, 2005

Firmwares for WRT54G

OK, now I am going to tell you about a few firmwares that crashed my Linksys (luckily I revived the router, thanks to the instructions posted at http://voidmain.is-a-geek.net/redhat/wrt54g_revival.html).

The working firmware on my router is Tarifa 0003. Unfortunately, I have never been able to make it work as a wireless bridge (WDS), even though I followed the instructions from the Internet (I think they were at www.linksysinfo.org).

Anyway, after failing to flash the NVRAM with another firmware, I was able to recover my router, which had almost become a brick. :-)

WRT54G Revival Guide

Here I am just reposting from http://voidmain.is-a-geek.net/redhat/wrt54g_revival.html, in case his website goes down. The steps below worked perfectly on my new Linksys WRT54G v4.

/* Void Main's WRT54G Tips */

» Linksys WRT54G Revival!
#include <stddisclaimer.h>

When might you use this tip?
- If you forgot to set your "boot_wait" nvram setting and uploaded a bad firmware image which caused your router not to boot (like I did)
- You failed every other instruction for reviving your router
- You like living on the edge and just wanna play

Pros: Turn your black and blue paper weight back into a working wireless router.

Cons: I suppose you could make your WRT54G even deader than it already is, although I have not actually heard of anyone who has done this. The pictures in this tip are for people who have the v1.1 hardware. It works for the 1.0 and 2.0 versions as well but the board layout is a little different in the other hardware versions. You'll just have to find your flash chip.

Tools Required: Small jewelers screwdriver (or any other small pointy metal object).

Ok, I'm convinced, let's get this baby working!!

Let us begin:

Find a nice open area to rip this baby apart:

WRT54G v1.1

As you can see, the one I use in my example is a v1.1 router, your board layout may be different:

WRT54G v1.1 - hardware version number

Use your fingers to unscrew the antennas from the back:

WRT54G v1.1 - unscrew antennas WRT54G v1.1 - unscrew antennas

This thing just snaps together, no screws involved, so just "pop" the blue face plate off. I find the easiest way to do this is to turn the unit upside down and place your hands between the feet on the side, then push on the blue feet with your thumbs:

WRT54G v1.1 - face plate snaps off WRT54G v1.1 - remove face plate

Now the board just slips right out the black cover:

WRT54G v1.1 - remove board from case

Now locate the flash chip. On my board it is clearly labeled "Intel Flash" but I don't believe all routers are labeled like this. Click on the pictures below for a better view. You will see that at each corner of the chip is a large white number. My picture is actually upside down (you didn't think I would make this easy on you did you?). Notice at the upper right corner of the chip is the number "1", upper left is the number "24", lower left is the number "25", lower right is the number "48" (all upside down in my pictures). Between the number 1 and 24 you will see a row of 24 silver pins. On the board above the pins there is a little white line every 5 pins that should help you count.

WRT54G v1.1 - Intel flash chip WRT54G v1.1 - notice white marks every 5 pins

Now comes the fun part. Do not plug the power in just yet but plug a patch cable into one of the 4 LAN ports on your router and plug the other end into a computer (my laptop works great for this). Configure your network card on your computer with a static IP address: IP: 192.168.1.2, NETMASK: 255.255.255.0, don't need a gateway address. Now if you are in Linux just type "ping 192.168.1.1" which will start a ping running. If you are in Windows (shame on you) then I think you have to pass a "-t" param (ping -t 192.168.1.1) so it doesn't stop trying to ping after 4 pings.

Ok, now for the nitty gritty fun part. Locate pin 15 (third white mark starting from pin 1). Take your jewelers screwdriver (philips head is what I used, nice and pointy) and stick the point between pins 15 and 16 (see NOTE1). While holding the screwdriver there, plug in the power and watch your ping screen. Hopefully you will be amazed (like I was) at seeing the pings starting to succeed. Don't be so happy that you drop the screwdriver on the board and start sparks flying. Remove the screwdriver and the pings should continue:

WRT54G v1.1 - short pins 15-16

The router is now in failsafe mode and is waiting on you to tftp a firmware image to it. Find any good firmware image for your router and upload it. In linux it might go something like this:

$ cd /home/voidmain/firmware
$ ls
OpenWrt_b3.bin
$ tftp 192.168.1.1
tftp> bin
tftp> put OpenWrt_b3.bin

If you have a command line version of tftp for Windows it should go pretty much the same way. Just make sure you transfer the file in binary mode (that's what the "bin" command did). Once the firmware has uploaded, your router should automatically reboot. If you uploaded the OpenWRT firmware like I did above you can then telnet into your box (telnet 192.168.1.1). If you uploaded the stock Linksys firmware you should be able to get to your router with your web browser (http://192.168.1.1/).

Now put your router back together by reversing the instructions in this tip. You are triumphant and there will be much rejoicing.

NOTE1: In my guestbook "Westy" said he had to short pins 16-17 rather than 15-16 on his WRT54G-FR but the rest of this document worked.

Further Reading:
Linksys GPL Firmware page (Thank you Linksys and thank you Richard Stallman (GPL)!!)
Original instructions at OpenWRT forums
OpenWRT Web Site (my wireless web server runs it)
My revival thread (Thanks Jim!)
My wireless web server thread
Jim Buzbee's Linux on WRT54G page
Seattle Wireless WRT54G page
P.S. SVEASOFT and Windows are not supported here.

Have fun!


Monday, September 19, 2005

Spamming going nastier!

I receive a lot of junk email every day. I mostly use my Yahoo account for personal email because their spam filter is pretty good. But recently, I received some weird emails that originated from my own blog!

Google's Blogger has a feature to forward any posting on our blog as a kind of notification message. But now this good feature is being abused by spammers to send their trash! They click "comment" on some of my postings and leave junk messages, so I get notifications in my email.

I think I will just disable this notification and leave the trash alone in the blog comments.

Broadband Penetration

Last week I borrowed a book from my professor titled Broadband Services: Business Models and Technologies for Community Networks. My professor is one of the editors.
After flipping through some pages, I stopped at one page with a diagram showing the top 15 countries by broadband penetration. Do you know what? The United States is only in 11th place.

The most broadband-penetrated country is South Korea, followed by Hong Kong, Canada, Taiwan, Denmark, Belgium, Iceland, Sweden, the Netherlands, Japan, the USA, Austria, Switzerland, Singapore and, in last place, Finland.

Tweaking Linksys Router (Part 3)

Last night I tried to reprogram my new Linksys WRT54GS with a different firmware (I think it was the Alchemy version?). It uploaded with no problem, but when I tried to reboot it, it just hung!

Gosh, I just wasted my $90-something on a router that has now turned into a useless brick (well, not really, because I am going to return it to Best Buy and claim that it does not work at all). Maybe I'll just exchange it for a WRT54G, as that model has a lot more firmwares available.

I'll post again once I get the exchange and succeed in upgrading the firmware.

Use Your Unused PC Power to save Life

Many PCs around the world, especially in office buildings, are left idle when their users are away. Many of them are quite powerful (Pentium 4, AMD, or even 64-bit versions, and/or with dual cores or multiple processors). Why don't we use them for something useful?

There is a project at Stanford University called Folding@Home that is trying to simulate proteins (interactions, quantum-physics computations, etc.). All of this requires a huge amount of computing power. The group of biochemistry, biophysics, chemistry, biology, physics and computer science experts led by Prof. Pande (the 'Pande Group') at Stanford has come up with a novel idea: distributed computing (a computational grid built from many clustered machines).

Instead of using a single supercomputer, the software distributes chunks of work to many computers connected to the server over the Internet. Each chunk of work is called a Work Unit. Every user who wants to contribute computing power can subscribe and download the software (people can also affiliate with a group or even make their own). The software running on each user's machine then downloads work units from Stanford's server and does the computation. After a work unit is done, the computer sends the result back to the server.

The software can use up to 100% of the PC, but we can adjust how much it is allowed to consume (or let it run only when the computer is idle). There are different versions of the software, console-only and screen-saver, available on Windows, Linux and PowerPC (Mac OS). Unfortunately, Solaris is not supported.

Actually, there is another project that came before Folding@Home (well, I'm not sure which one came first): SETI@Home from UC Berkeley, which I used to run. But after some time I started thinking: why should I run such a program just to search for extraterrestrials (ET)? (Off the record, I don't believe there are other creatures in outer space; even if they exist, not with the intelligence human beings have.)

So, instead of leaving your PC idle for many hours (do the math: you leave work at 6:00 PM and come back the next day at 8:00 AM; the PC is idle for 14 hours!), you can participate in a project that may find cures for some diseases.

Please check it out at: http://folding.stanford.edu

Sunday, September 18, 2005

AMD Clock and Voltage (part 2)

It's been a while since I posted the first blog entry on the subject (AMD64 on the Compaq Presario R3000). I just want to update the status of this issue: one day I checked the AMD website and found a note about "Windows XP incorrectly reports CPU clock". There, they suggest that users upgrade the laptop BIOS. I followed their instructions and rebooted my laptop.

Do you know what happened after Windows XP rebooted? The CPU went to full clock speed and never scaled back. The fan started spinning a few seconds later and kept running. I checked the CPU temperature (via third-party software I downloaded for free from the Net), and it showed a temperature close to 90 degrees Celsius! After a few minutes the computer turned itself off (not a graceful shutdown!). It was like that no matter what I changed in the power-saving settings in Windows. Damn! (Well, there was a workaround: I used another program to manually force the CPU frequency back to 800 MHz to keep it from overheating and shutting down.) I checked the Microsoft site; the sudden shutdown problem was caused by overheating.

I almost gave up (and did not use Windows during that time, booting into the Linux partition instead; Linux controls ACPI with no problem, and besides, it never shuts my computer down). Finally, I gave downloading the latest AMD driver another try and reinstalled it on Windows. Somehow it now worked! (I am pretty sure I did exactly the same thing a few times before with no luck, so I believe there must be a fix in the newer driver.)

Now the PC is working OK. The frequency can scale up to its maximum (2.2 GHz). There are only 3 frequency steps: 800 MHz (base clock), 1.8 GHz and 2.2 GHz. The fan turns on after a few seconds of the CPU running at maximum clock, the temperature stabilizes around 80 degrees Celsius, and Windows never shuts itself down anymore.

Phew!

Tweaking Linksys Router (Part 2)

OK, now I have upgraded my router's firmware to this fancy one: v4.70.6_Hyperwrt-2.1b1-(Thibor). Unfortunately, after looking around in the menus, I still could not find the wireless bridge feature. But there is a cool feature there: telnet!

I then enabled the telnet daemon and telnetted into it. Do you know what's inside? Linux!! Wow, Linksys uses Linux for its router products! I poked at some stuff there. Pretty cool! All right, maybe my next hacking project is to dig into the source code (it's available on the linksys.com page, under GPL download). The source code is really huge; it is more than 100 MB even zipped! Perhaps there are some binary firmware images inside it. I'll check it out later and report on this blog once I am done exploring this amazing product.

(PS: I now highly recommend that people buy Linksys routers. I used to suggest that people not buy them, as their products are relatively more expensive. But now, it is really worth investing a little bit extra for the big payoff you get!)

Tweaking Linksys Router's Firmware

I just bought a Linksys WRT54GS wireless router. It is a wireless broadband router with SpeedBooster. According to the box, the SpeedBooster feature may boost speed by up to 35%.
I bought this router because I want to make another 'wireless island' at home, and also because the phone cable running to the DSL modem has severely degraded (when I checked it, the insulation was torn, so the wires were almost broken). Why do I use a long phone cable to connect to my DSL modem? Because there is no phone jack in the room where I keep my desktop computers (which use CAT5 cables to connect to the router).

I bought the router from Best Buy, which gave a $20 rebate. Not bad for a 54 Mbps wireless router, I said. But when I tried to configure it, I could not find anything in the web menu saying that the router is able to connect to another router over a wireless connection. I was so disappointed. I then went to the Internet and searched for information about 'wireless bridging'. There were some hits, but many links pointed me to hardware (such as the Linksys Wireless Bridge). That is not what I want!

After spending some more time refining the search keywords, I eventually ended up at some discussion websites. Interestingly, these discussions talk about replacing the official Linksys firmware with a modified one! After going to the Linksys website, I found out that for many of their products they also provide the GPL source code. Amazing! The modified firmware supports a feature called "WDS" (I don't remember what the abbreviation stands for, but it is something like a wireless client/bridge mode).

The following links are a few of those that have the information and/or firmwares:

http://www.hyperwrt.org/

www.dd-wrt.com

http://sourceforge.net/projects/wifi-box

http://www.linksysinfo.org

http://wrt54g.thermoman.de/

Some of them require free subscription.

Monday, September 12, 2005

Next Generation Optical Transport?

Currently, the optical transport technology (both SONET and SDH) used by service providers has reached maturity and seems to have stagnated. There are really no new features being implemented in this technology. Most vendors have already implemented features such as BLSR, UPSR, bridge-and-roll, Ethernet over SONET (using GFP), and even DWDM.

But if we look at the distribution of bandwidth used by customers, it seems there is still a lot of 'dark' fiber; in other words, much of the bandwidth is left unused. Are these slots really unused? In the United States itself, high-definition TV has not reached the point many people expected. What about the Internet? Well, even though most service providers (telephone, Internet or cable TV) offer broadband access, the numbers are not satisfactory.

Now comes Packet-over-ADM using optical connections. Many people expect this to be the next-generation optical transport. Some vendors already have PADM-enabled equipment.

Technology-wise it is the next-generation optical transport, but what about business-wise? Well, let's see what happens in the near future.

Sunday, September 11, 2005

My Book Collections on Linux

A few months ago I bought another Linux-related book titled Linux Kernel Development by Robert Love. A very good book indeed! I have read the first few chapters and can't wait to finish it (well, it is hard for me, as I need to spare time to read a lot of other books and documents too).
This book adds to my Linux bookshelf, alongside Linux Device Drivers, Linux Kernel, and Linux Wireless.

Friday, June 17, 2005

KDE or GNOME?

I have been asking myself which GUI best fits my GNU/Linux. From a user perspective, KDE is more comprehensive and complete. It supports many things and is well structured, because it uses Qt from trolltech.com. The graphics themselves are really good. On the other hand, GNOME is very open and easier to compile; it also comes from GNU under the GNU General Public License (GPL) and seems more stable.

I have tried to compile KDE from scratch with many optimizations enabled on my Pentium 4 laptop (-O3, -msse2, -mfpmath=sse, -march=pentium4, -mthreads, etc.). First I compiled Qt, which depends on CUPS, MySQL and some other modules. After compilation, my KDE applications were not stable anymore: printing did not work, KMail sometimes crashed, and so on.

With GNOME, on the other hand, I compiled some applications such as GAIM, which required me to compile some GNOME modules, and everything still worked fine.

The total size of the modules is also smaller for GNOME than for KDE. While KDE has a lot of applications and modules, GNOME seems thinner and fits lower-end PCs better.

Anyway, the Linux community must decide which GUI they want to use for desktop Linux. Otherwise, they will be crushed by Apple's Mac OS, which has decided to steer towards Intel processors. Yes, they have decided that their next Mac computers will use Intel dual- or quad-core processors.

Sunday, June 5, 2005

CORBA header

CORBA messages are carried over TCP. Each message consists of a message header and message data.

The structure of the message header is:

char[4] magic;
octet[2] GIOP_version;
octet flags;
octet message_type;
unsigned long message_size;

The first 4 octets contain "GIOP", followed by 2 octets of GIOP version (the high byte is the major version, the low byte is the minor version).

flags (in GIOP 1.1, 1.2, and 1.3) is an 8-bit octet. The least significant bit indicates the byte ordering used in subsequent elements of the message (including message_size). A value of FALSE (0) indicates big-endian byte ordering, and TRUE (1) indicates little-endian byte ordering. The byte order for fragment messages must match the byte order of the initial message that the fragment extends.

The second least significant bit indicates whether or not more fragments follow. A value of FALSE (0) indicates this message is the last fragment, and TRUE (1) indicates more fragments follow this message. The most significant 6 bits are reserved. These 6 bits must have the value 0 for GIOP versions 1.1, 1.2, and 1.3.

message_type indicates the type of the message; these correspond to enum values of type MsgType.

message_size contains the number of octets in the message following the message header, encoded using the byte order specified in the byte order bit (the least significant bit) in the flags field (or using the byte_order field in GIOP 1.0). It refers to the size of the message body, not including the 12-byte message header. This count includes any alignment gaps and must match the size of the actual request parameters (plus any final padding bytes that may follow the parameters to have a fragment message terminate on an 8-byte boundary).


MsgType definition:

enum MsgType_1_1 {
Request,
Reply,
CancelRequest,
LocateRequest,
LocateReply,
CloseConnection,
MessageError,
Fragment // GIOP 1.1 addition
};

So message_type may contain 0 for Request, octet 1 (00000001) for Reply, and so on.
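
To make the layout concrete, here is a small C sketch (my own illustration, not taken from any ORB) of parsing the 12-byte header described above and fixing up message_size according to the byte-order flag:

/* a sketch of parsing the 12-byte GIOP message header (GIOP 1.1+ layout) */
#include <stdint.h>
#include <string.h>

struct giop_header {
    char     magic[4];        /* "GIOP" */
    uint8_t  version_major;   /* GIOP_version, high byte */
    uint8_t  version_minor;   /* GIOP_version, low byte */
    uint8_t  flags;           /* bit 0: 0 = big-endian, 1 = little-endian */
    uint8_t  message_type;    /* MsgType: 0 = Request, 1 = Reply, ... */
    uint32_t message_size;    /* body size, not counting this header */
};

/* returns 0 on success, -1 if the magic is wrong */
static int parse_giop_header(const unsigned char *buf, struct giop_header *h)
{
    uint16_t one = 1;
    int host_is_le, msg_is_le;

    memcpy(h, buf, 12);
    if (memcmp(h->magic, "GIOP", 4) != 0)
        return -1;

    msg_is_le  = h->flags & 0x01;               /* sender's byte order */
    host_is_le = (*(unsigned char *)&one == 1); /* our byte order */

    if (msg_is_le != host_is_le)                /* swap message_size if needed */
        h->message_size = ((h->message_size & 0x000000FFu) << 24) |
                          ((h->message_size & 0x0000FF00u) << 8)  |
                          ((h->message_size & 0x00FF0000u) >> 8)  |
                          ((h->message_size & 0xFF000000u) >> 24);
    return 0;
}

After this, message_size tells you how many more octets of body to read from the TCP stream.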

Forget My Ignorance! (answer to Bad Documentation...)

I was supposed to write this entry a while ago, but with all the workload and family life, there was no time to update this blog. Anyway, a few days after I wrote the entry "Bad Documentation on Compaq Products", I went to CompUSA. A salesperson approached me and asked how he could help.
I told him that I was looking for a FireWire cable for my new Compaq laptop. He brought me to a rack full of different kinds of cables. I told him that I had already bought one (4-pin to 4-pin), but somehow it would not fit my laptop.

He asked me what kind of laptop I have. I explained that it is a Compaq Presario R3000. He then brought me to the computer section and we found the same model. He tried the cable. At first, it would not go all the way into the socket. He then carefully reinserted it, and amazingly it fit!

I was stunned! How could this cable fit the connector while mine did not? Rushing back home (after telling the salesperson that I would buy their cable if mine still did not fit), I carefully inserted mine. At first it was kind of hard. I tried again, this time really aligning it and using a little force. Voila, it worked!

So with this entry I would like to revise my previous post. Still, Compaq does not provide complete details about some parts of the laptop.

Sunday, May 8, 2005

Bad Documentation on Compaq Products

I had been wondering why my old video camera's i.LINK (or FireWire, or IEEE 1394; they are all the same thing) cable would not fit the port on my Compaq Presario R3000Z. I thought, "Aah... of course it does not fit: the end that goes to the camera is a 4-pin connector, while the other end that goes to the PC has 6 pins. I gotta buy a 4-pin to 4-pin cable then."

I rushed to Best Buy (the closest electronics superstore to my home) and found one made by Sony. Surprised by its higher-than-expected price, I asked a salesperson there whether there was another option. He told me to go to the Cables & Networking aisle. Gotcha! I found one for 10 bucks less than the Sony one (it was $32 or something, I forget). The cable is 4-pin male to 4-pin male.

Getting back home, I opened the box and tried to connect it to the laptop. Surprisingly, I could not! After checking the pins and the shape, I realized the cable's connector is a slightly different shape than the laptop's. Damn! I had wasted 35 bucks on a useless cable.

I went to compaq.com but could not find anything related to this issue. I then went to Google. After skipping some pages, I spotted a page from hp.com documenting the laptop (I forget the title of the PDF file), but I still could not figure out what kind of connector it requires.

I really am disappointed to see a big company like HP miss this small-but-important thing. Their product documentation is bloody bad too. I hope they change that. I like their products (I have an HP printer and now this R3000Z), but this recent problem drops my rating of their documentation quality to below average.

Tuesday, May 3, 2005

Next Generation DVD battle

There are two groups battling to set the standard for the next generation of DVD. One group, with Sony and Philips among its members, is proposing Blu-ray; the other, with Toshiba and others, is proposing HD-DVD. To summarize the differences between them, I put a table here:

HD-DVD
------
Standardization Body: www.dvdforum.org
Key Hardware Supporters: Toshiba, NEC, Sanyo
Key Hollywood Studio Backers: Paramount, New Line, Universal, Time Warner
ROM Capacity: 15 GB (SL), 30 GB (DL)
Laser Pickup: Blue laser
Numerical Aperture of Lens: 0.65
Design Emphasis: Backward compatibility with the existing DVD standard
CODEC: MPEG-2, ITU H.264 (AVC), VC-1
Copy Protection: Advanced Access Content System (AACS)
Interactive Software: Based on a derivative of Microsoft's MSTV

Blu-ray
-------
Standardization Body: www.blu-raydisc.com
Key Hardware Supporters: Sony, Hitachi, LG, Mitsubishi, Panasonic, Pioneer, Philips, Samsung, Sharp, Thomson, Apple, Dell, HP
Key Hollywood Studio Backers: Sony Pictures, Tri-Star, Disney, MGM
ROM Capacity: 25 GB (SL), 50 GB (DL)
Numerical Aperture of Lens: 0.85
Design Emphasis: Larger storage capacity
CODEC: MPEG-2, ITU H.264 (AVC), VC-1
Copy Protection: Advanced Access Content System (AACS)
Interactive Software: Based on Java-based MHP/GEM

Sunday, May 1, 2005

AMD Clock and Voltage

After having the laptop for almost a month, I have not been able to make my R3000Z use the maximum clock frequency and voltage allowed for the CPU. So far, I have downloaded the AMD PowerNow! Dashboard to monitor CPU utilization, clock and voltage. It keeps using 36% of the maximum CPU speed, and the voltage stays at 1.1 volts. On the other hand, this really saves the battery a lot (90%).

I tried the RightMark CPU Clock Utility. With it I could pump up the clock and voltage automatically or manually. But still, why can't the pre-installed AMD driver adjust the frequency and voltage the way I want? I tried to overload the CPU with a lot of processes, but still no change.

Sunday, April 10, 2005

Compaq Presario R3000Z

Last week the new laptop I ordered from hpshopping.com arrived. It is a Compaq Presario R3000Z with an AMD64 3400+ processor, WinXP Home Edition, 1 GB RAM, an 80 GB 5400 RPM hard disk, a 12-cell battery, 54g Broadcom Wi-Fi + Bluetooth, a 64 MB NVIDIA 440 Go, and 1680x1050 screen resolution. Not bad at all for 1460 USD total, for which I got a 12-month interest-free loan from HP. The price was through an employee promotion program, as my employer has a special arrangement with HP for its employees.

I did some tests such as closing and reopening the lid, and trying the ethernet (wired) link and Wi-Fi. They went OK, although sometimes the Wi-Fi interfered with the Linksys PCMCIA card in my other laptop (a Tecra 8100; see my other post about where and when I got that free laptop :-).

After a few days of trying it out and installing some applications, I repartitioned the hard disk using Norton PartitionMagic 8.0. I partitioned about 12 GB for Linux and 512 MB for its swap partition. I used Novell SuSE Linux Pro 9.2. Most of the hardware was detected (I was surprised that even the touchpad worked; I read somewhere on the Internet that it was kind of hard to get the touchpad working, but it was OK on my Linux, perhaps because I use SuSE 9.2?). For the Wi-Fi, as usual Broadcom does not provide a Linux driver, but thanks to ndiswrapper it could use the Windows driver instead. But I still had a problem even with ndiswrapper: somehow the DHCP client could not get an IP address, although the Wi-Fi card recognized my access point (a Netgear 802.11b) and found its MAC address. I am not sure whether this is because of the kernel I use (oh, I forgot to mention that SuSE 9.2 comes with kernel 2.6.9, so I upgraded the kernel to the latest at this time, 2.6.11.7).

When I compiled the kernel and ndiswrapper, I used all the processor-specific optimizations by defining the environment variables CFLAGS and CPPFLAGS. My CFLAGS contains "-mcpu=athlon64 -mfpmath=sse -msse3 -m3dnow -m64 -mthreads", and so does CPPFLAGS. There were no errors during compilation and everything went well. But I was still wondering why gcc did not recognize -m64 when I tested it by creating a small C program and compiling it with the CFLAGS above.

The laptop's software bundle includes Microsoft Works (not bad for many simple daily uses), Microsoft Money 2005 Standard Edition, InterDVD 4, RecordNow! CD & DVD burning software, Muvee Autoproducer and some other programs. Yes, I custom-ordered the laptop with a dual-layer DVD writer.

What I like most about the laptop is its screen. It's so cool! It is a 15.4-inch WSXGA screen, so it is bright and the displayed fonts are crisp and bold. Even on Linux (where I also set the resolution to the maximum allowed, 1680x1050), it really rocks!

The processor uses PowerNow! technology from AMD. During normal use (which is most of the time), the clock is adjusted down low (to about 700 MHz). According to the AMD spec, the maximum clock on the Mobile Athlon 64 3400+ is 2.2 GHz, but I haven't been able to make the processor reach its maximum speed. I think it may go to that speed when rendering video. When the CPU increases its clock, the laptop gets hotter, which triggers the CPU fan to blow the hot air out. So far, I have not seen the laptop get too hot. Maybe if I run Doom 3 or Half-Life 2?

One thing I don't like about the laptop is its weight. It is twice as heavy as my IBM laptop at work. Well, what the heck, I don't really intend to use this laptop on the go, but mostly to be more flexible wandering around the house with it and still be able to do my hobby (hacking :-).

Overall, I love this laptop. I really recommend it for people seeking a balance between budget and performance. Besides, it supports 64-bit processing, so when Microsoft finally releases 64-bit WinXP officially we will not be behind and will not need to upgrade our hardware; this workhorse can still be used for a few years (until IA-32 finally becomes obsolete and a new architecture arrives).

Saturday, April 2, 2005

How Physicists contributed to the Internet

Many people don't realize that their daily internet life has been shaped, more or less, by the work of physicists. Do you know who invented the World Wide Web (the HTTP, HTML and URL things)? Tim Berners-Lee, and he was a physicist working at CERN in Europe. One of the gurus of TCP? Van Jacobson, who was working in high-energy physics at Berkeley's Lawrence laboratory.

I believe many physicists have contributed to the improvement and invention of TCP/IP and the other protocols currently in use.

Wednesday, March 23, 2005

Computer Reuse Through Linux

One day I found a laptop dumped in the recycle bin at my office. It is a Toshiba Tecra 8100 with a Pentium III 450 MHz, 64 MB RAM and a 12 GB hard disk. It had Windows NT on it. I took it home and reformatted it with my SuSE Linux 9.2. Well, the memory seemed not to be enough, so I went to ebay.com and found somebody selling a 128 MB PC100 memory module for around 20 bucks. Not bad, I thought. So I bought it and installed it in the laptop.

I have 192 MB now, so I could run a GUI (I use KDE, but also added many GNOME libraries to run GNOME-based applications). I then downloaded the latest kernel available at that time (2.6.10). I also bought a Cisco Linksys PCMCIA WLAN card (WPC54 with SpeedBooster). Unfortunately, Linksys has not provided a Linux driver yet, but luckily ndiswrapper can be downloaded from the Net. So I copied the Windows driver to my Linux box, ran ndiswrapper and... voila, the wireless card worked. Well, I still had a problem here: apparently, there was a conflict between ndiswrapper and the sound drivers. I rebuilt the kernel with the sound drivers disabled, but ndiswrapper still sporadically did not work very well (sometimes the WLAN lost its connection). For your info, I rebuilt everything with processor-specific flags enabled, such as -mcpu=pentium3, -msse and -mfpmath=sse.

A few days ago, I gave kernel 2.6.11.5 a try. I even rebuilt Trolltech's Qt and many libraries. After many days of recompiling, I successfully got the wireless working together with the sound drivers. It was one of the happiest days of my life, getting the old laptop back into use. I have been using it for many of my daily activities, including browsing, reading email, and even as a lightweight server. Yes, a server. Imagine if I used Windows for this purpose; I might have condemned the laptop to hell for its slowness.

The laptop runs an SSH server, FTP, Telnet and many other services. I even connected my external USB hard drive, so I got an additional 12 GB of storage space. Not bad at all.

At work, I also partitioned my other laptop (an IBM T30 with 512 MB RAM and 40 GB of total space): 6 GB for Linux, and the rest for Windows 2000. You know what? I ended up using Linux for my work activities almost every day. Linux is really cool, and I have learned a lot about many things because of the open-source applications and tools available on the internet.

I really thank the people out there who have developed such great operating systems, applications and tools.

Thursday, March 3, 2005

Got Answer from one of the 'Hacker'

A few weeks ago I modified the article about SHA-1 on www.wikipedia.org by adding a link to a new page with a brief biography of one of the SHA-1 hackers, a Chinese researcher named Xiaoyun Wang. After a few minutes, somebody removed the link, and even the new page I added, due to infringement of copyrighted material.

I was surprised, but then I sent an email to the researcher asking whether she objected to my write-up. I got a reply a few days later saying that her team and the university (Shandong University in China) are going to create a new website dedicated to this security work. Well, she did not really answer my questions, but at least I got a response from an expert and Ph.D. in security.

Let's wait and see what their website and papers will look like.

Rebuilding KDE made Easy!

After a few months of not checking the KDE website (www.kde.org), two days ago I revisited the site and found an interesting tool for KDE 3.3.2: Konstruct. The tool is easy to use and is designed to build KDE automatically (checking components, downloading missing ones, configuring them, and compiling and linking the whole set of libraries and components).

The only command I needed to execute is:

cd konstruct/meta/kde; make install

So easy to build now. As I recall, it gave me a hard time to recompile my KDE (it was 3.3) on my IBM T30 laptop: I had to download all the *.bz2 files (plus the qt-x11 libraries), extract them, and configure and compile them one by one.

I am still having a problem, though, when compiling it on my 'free' Toshiba Tecra 8100 laptop. Somehow, one component (Kppp) complains about 'regfree' and some other procedures, although I have double-checked that the Kpp*.cpp files have "#include <regex.h>". Does anybody know how to resolve it?

Friday, February 25, 2005

Crack in SHA-1 code 'stuns' Security Gurus

Three Chinese researchers said on February 14, 2005 that they had compromised the SHA-1 hashing algorithm at the core of many of today's mainstream security products.

In the wake of the news, some cryptographers called for an accelerated transition to more robust algorithms and a fundamental rethinking of the underlying hashing techniques.

"We've lost our safety margin, and we are on the edge," said William Burr, who manages the security technology group at the National Institute of Standards and Technology (NIST).

"This will create big waves, in my opinion," said the celebrated cryptographer Adi Shamir, co-inventor of the RSA algorithm. "This break of SHA-1 is stunning," concurred Ronald Rivest, a professor at MIT who co-developed RSA with Shamir.

RSA is a public-key cryptosystem for both encryption and authentication; it was invented in 1977 by Ron Rivest, Adi Shamir, and Leonard Adleman [RSA78]. Details on the algorithm can be found in various places. RSA is combined with the SHA-1 hashing function to sign a message in this signature suite. It must be infeasible for anyone to either find a message that hashes to a given value or to find two messages that hash to the same value. If either were feasible, an intruder could attach a false message onto Alice's signature. The hash function SHA-1 was designed specifically to have the property that finding a match is infeasible, and is therefore considered suitable for use in this role.
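
As a tiny illustration of the hashing half of this (just a sketch using OpenSSL's SHA1() routine; it shows only the digest computation, not the RSA signing step):

/* sketch: compute a SHA-1 digest with OpenSSL (link with -lcrypto) */
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

int main(void)
{
    const char *msg = "hello world";
    unsigned char digest[SHA_DIGEST_LENGTH];   /* 20 bytes for SHA-1 */
    int i;

    SHA1((const unsigned char *)msg, strlen(msg), digest);

    for (i = 0; i < SHA_DIGEST_LENGTH; i++)
        printf("%02x", digest[i]);              /* print the 160-bit hash */
    printf("\n");
    return 0;
}

A signature suite like RSA-SHA1 then encrypts this fixed-size digest with the signer's private key, rather than the whole message.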

One or more certificates may accompany a digital signature. A certificate is a signed document that binds the public key to the identity of a party. Its purpose is to prevent someone from impersonating someone else. If a certificate is present, the recipient (or a third party) can check that the public key belongs to a named party, assuming the certifier's public key is itself trusted. These certificates can be held in the Attribution Information section of the DSig 1.0 Signature Block Extension and thus passed along with the signature to aid in validating it. (See the Attribution Information section in the DSig 1.0 Specification.)

The signature section of the DSig 1.0 Signature Block Extension is defined in the DSig 1.0 Specification. For the RSA-SHA1 signature suite, the signature section has the following required and optional fields.

Who are these three Chinese researchers? One of the members, Lisa Yin, was a Ph.D. student who studied under Ronald Rivest (an RSA inventor) at MIT. Another was responsible for the earlier crack of the MD5 hashing algorithm (also developed by Rivest, in 1991), which happened in August 2004.

To learn more about MD5, please visit http://en.wikipedia.org/wiki/MD5. For RSA: http://en.wikipedia.org/wiki/RSA, and for SHA-1: http://en.wikipedia.org/wiki/SHA-1

An open-source implementation of the algorithm can be found at http://www.cr0.net:8040/code/crypto/sha1/. Shamir et al. published the original RSA paper in the ACM: R.L. Rivest, A. Shamir, L.M. Adleman, "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems", Communications of the ACM, v. 21, n. 2, Feb. 1978, pp. 120-126.

Electronics, biology: twins under the skin

By Chappell Brown
EE Times
February 09, 2004 (10:33 AM EST)

Like the twin strands of a double helix, electronics and biotechnology are joining forces in a technological explosion that experts say will dwarf what is possible for either one of them alone.

Hints of this pairing can be seen in the economic recovery that's now taking hold. One peculiarity that hasn't grabbed many headlines is biotech's role in pulling Silicon Valley out of its three-year slump. A report last month from the nonprofit organization Joint Venture: Silicon Valley Network points up this fact, showing that venture funding in biotech startups rose from 7 percent in 2000 to 24 percent last year while investment in information technology startups fell from 10 percent to 4 percent over the same period. The immediate question is whether this is a temporary anomaly or the emergence of a major trend.

Certainly computers, biochips, robotics and data sharing over the Internet have been important tools in accelerating biological and medical research, and it should be no surprise that new application areas and markets would grow around them. The view from inside the engineering cubicle might be something like, "Yes, we have created a revolutionary technology that creates new markets-biomedicine is simply one area that benefits from advances in VLSI."

But a long-term perspective suggests a tighter linkage between electronics technology and molecular biology. Indeed, it could be argued that the second half of the 20th century forged not one but two digital revolutions, fueled by two fundamental breakthroughs: transistorized digital computers and the cracking of the genetic code. The latter advance showed that the genome was transmitted through the generations by means of digital storage in the DNA molecule.

In the following decades, both developments matured at an increasingly rapid pace. Digital circuits were inspired by crude models of the nervous system (see story, below). Although the models turned out to be wrong in many respects, technologists discovered that digital representation brings the advantages of simplicity, stability and an ability to control errors. Those same properties have made DNA the viable and stable core of living systems for billions of years.

But the nervous system is only one component of the body that is encoded in DNA, which somehow not only represents the information for building the basic components of cells, but also encodes the entire process of assembling highly complex multicellular machines. The growth process is an amazing feat of bootstrapping from the genetic code to functioning organisms. Essentially, an organism is a molecular digital computer that constructs itself as part of the execution of its code.

Leroy Hood, director of the Institute for Systems Biology (Seattle), believes that science aided by computers and VLSI technology will achieve major breakthroughs in reverse-engineering the cell's assembly processes. The fallout will be new circuit and computational paradigms along with nanoscale mechanisms for building highly compact molecular computing machines.

"There will be a convergence between information technology and biotechnology that will go both ways," said Hood. "We can use new computational tools to understand the biological computational complexities of cells, and when we understand the enormous integrative powers of gene regulatory networks we will have insights into fundamentally new approaches to digital computing and IT."

But cell machinery can also be enlisted in the kind of nanostructure work that is currently done manually with tools such as the atomic-force microscope. "The convergence of materials science and biotech is going to be great, and we will be able to learn from living organisms how they construct proteins that do marvelous things and self-assemble," Hood said. "There will be lessons about how to design living computer chips that can self-assemble and have enormous capacity."

Hood is credited with inventing the automated DNA-sequencing systems that were the first step in accelerating the decoding of the human genome. Accomplished two years ahead of schedule thanks to many enhancements to the process, including MEMS-based microfluidic chips, the achievement has stimulated efforts to take on the far more complex task of decoding protein functions.

Hood's institute, which was founded in 2000, is one example of a wave of similar organizations springing up across the United States. The idea is to engage a diverse group of specialists-mechanical and electronic engineers, computer scientists, chemists, molecular biologists-in the effort to decode the cellular-growth process. Stanford University's BIO-X Biosciences Initiative, for example, is dedicated to linking life sciences, physical sciences, medicine and engineering. The Department of Energy's Pacific Northwest National Laboratory has its Biomolecular Systems Initiative, Princeton University its Lewis-Sigler Institute for Integrative Genomics. Harvard Medical School now has a systems-biology department, and MIT has set up its Computational and Systems Biology Initiative (CSBi).

Proteins have remarkable chemical versatility and go far beyond other molecules as chemical catalysts, said Peter Sorger, director of MIT's CSBi. But applications of their properties will have to contend with a difficult cost differential between medical and industrial products.

"Using proteins as catalysts was the absolute beginning of the biotech industry. We know that proteins are the most extraordinary catalysts ever developed. The problem is that most of the chemical industry is a low-margin business and biology has historically been very expensive," Sorger explained.

While organic catalysts derived from oil are not as efficient, the low cost of producing them has kept proteins out of the field. "Most of the applications of proteins to new kinds of polymers, new plastics, biodegradable materials, etc. have all been limited by the fundamental economic problem that oil is so darn cheap," he said. "As a result, bioengineered materials are only used in very high-end, specialized applications."

However, Sorger believes that such bioengineered products will arrive, probably first in biomedical applications, which will then spawn low-end mass-market products. He used the example of Velcro, which was devised as an aid to heart surgery and later became a common material in a wide range of commercial goods. Sorger is looking forward to nanotechnology applications, the assembly of materials and circuits using biological processes, as the first direct applications of protein engineering outside of the biomedical field.

Sorger cited the work of MIT researcher Angela Belcher as an example of the technological spin-offs that will come from attempts to understand cellular processes. Working in the cross-disciplinary areas of inorganic chemistry, biochemistry, molecular biology and electrical engineering, Belcher has found ways to enlist cellular processes to assemble structures from inorganic molecules. By understanding how cells direct the assembly of their structural components, Belcher is finding ways to assemble artificial materials from inorganic nanoclusters that can function as displays, sensors or memory arrays. Another interdisciplinary group at MIT is putting together a library of biological components that engineers could use to build artificial organisms able to accomplish specific nanoscale tasks.

Underlying the excitement surrounding the merger of digital electronics systems and molecular digital organisms are the dramatic capabilities of lab-on-a-chip chemical-analysis systems, automated data extraction and supercomputer data processing. These technologies are part of what made it possible to sequence the entire human genome. A benchmark for the rapid progress promised by those tools may be the announcement by three biotech companies late last year of single chips containing the human DNA molecule in addressable format: the human genome on a chip. That might compare to the advent of the CPU-on-a-chip, which catalyzed the VLSI revolution in the mid-1970s.

The barrier to moving this capability forward lies in the physical differences between DNA and the proteins it codes. Proteins are built from DNA sequences as linear sequences of amino acids that then spontaneously fold into complicated 3-D shapes. And the process becomes more complex as proteins begin to interact with one another. For example, there is a feedback loop in which proteins regulate the further expression of proteins by DNA. As a result, there are no parallel fluidic-array techniques to accelerate the analysis of protein families. "These technologies have a long way to go. I don't see any fundamental breakthroughs [in protein analysis] in the next few years, but in 10 years, who knows?" said Steven Wiley, director of the Biomolecular Systems Initiative at Pacific Northwest National Laboratory. "There are a lot of smart people out there working on this."

The fundamental challenge is the dynamic aspect of protein function. "DNA is static; once you sequence it, you have it," Wiley said. But proteins "are constantly interacting, so you have to run multiple experiments to observe all their functions and you end up with multiple terabytes of information. So, how are you going to manage and analyze all this information?"

But the excitement generated by recent successes with the genome is contagious. Plans are afoot to decode the "language" of proteins, making their functions widely available to engineers; anyone with a personal computer and a modem can access the human genome over the Internet; lab-on-a-chip technology continues to reduce the cost of bioexperimentation while ramping up throughput. And there is venture capital funding out there.

Tech trends will topple tradition


By Ron Wilson
EE Times

January 10, 2005 (9:00 AM EST)


OUTLOOK 2005
Where to look for earthshaking technology developments? Probably the best place to start is with the roadblocks that appear to stand in the way of traditional progress. There seem to be three of them, which the industry is approaching at considerable velocity. One is the diminishing progress in making CPUs faster. Another is the inability of manufacturing to keep up with the exponential growth in the complexity of systems. And the third is the seemingly insurmountable barrier between microelectronic and living systems.

For several years, there has been a grassroots movement to tackle supercomputing problems on multiprocessing systems in which "multiple" means thousands, or even millions. Welcome to the world of peer computing.

The concept is disarmingly simple. There are millions of PCs, workstations and servers in the world, most of which sit unconscionably idle most of the time. If pieces of an enormous computing task could be dispatched over the Internet to some of these machines — say, a few tens of thousands — and if the pieces ran in the background, so that the users weren't inconvenienced, a lot of computing work could be done essentially for free.

This is exactly the way the Search for Extraterrestrial Intelligence (SETI) at Home project works. Most of the people who run SETI are volunteers. But there are also commercial uses of grid networks, as such Internet-linked communities of computers are known. United Devices (Austin, Texas), which provided the supervisory software for SETI, is a commercial enterprise that sells grid-computing systems to commercial clients.

Of course, there is fine print in the tale, too. One obvious issue is that massive networks of loosely coupled computers are useful only if the application lends itself to massive parallelism.

These are the applications that Gordon Bell, senior researcher at Microsoft Corp.'s Bay Area Research Center, calls "embarrassingly parallel." In the SETI program, for instance, essentially the same relatively simple calculations are being performed on enormous numbers of relatively small data sets. The only communication necessary between the peer computer and the supervisor, once the data is delivered to the peer, is a simple "Yes, this one is interesting" or "Nope." The application is ideal for a loosely coupled network of peers.

Stray from that ideal situation, though, and things start to get complicated. Bell pointed out that bandwidth is so limited in wide-area networks, and latency so large and unpredictable, that any need for tight coupling between the peers renders the approach impractical. And of course, the individual task size has to fit in the background on the individual peer systems.

Is it possible to work around these limitations? Bell was guardedly pessimistic. "After two decades of building multicomputers — aka clusters that have relatively long latency among the nodes — the programming problem appears to be as difficult as ever," Bell wrote in an e-mail interview. The only progress, he said, has been to standardize on Beowulf — which specifies the minimum hardware and software requirements for Linux-based computer clusters — and MPI, a standard message-passing interface for them, "as a way to write portable programs that help get applications going, and help to establish a platform for ISVs [independent software vendors]."

Will we find ways to make a wider class of problems highly parallel? "I'm not optimistic about a silver bullet here," Bell replied. "To steal a phrase, it's hard work — really hard work."

But Bell does point to a few areas of interest. One is the observation that peer networks can work as pipelined systems just as well as parallel systems, providing that the traffic through the pipeline is not too high in bandwidth and the pipeline is tolerant of the WAN's latencies.

Will peer networks replace supercomputers? In the general case, Bell believes not. Technology consultant and architecture guru of long standing John Mashey agrees. "Anybody who's ever done serious high-performance computing knows that getting enough bandwidth to the data is an issue for lots of real problems," Mashey wrote. In some cases, creating a private network may be the only way to get the bandwidth and latency necessary to keep the computation under control. But that of course limits the number of peers that can be added to the system. And there are also issues of trust, security and organization to be faced.

But even within these limitations, it seems likely that peer computing on a massive scale will play an increasing role in the attack on certain types of problems. It may well be that our understanding of proteins, modeling of stars and galaxies, and synthesis of human thought will all depend on the use of peer networks to go where no individual computer or server farm can take us.

Some systems are too complex to be organized by an outside agent. Others — nanosystems — may be too small to be built by external devices. These problems lie within the realm of the second technology guess we are offering, the technology of self-assembling systems. Like peer-computing networks, self-assembling systems exist in specific instances today, although much more in the laboratory than on the Web. And like peer networks, self-assembling systems promise to break through significant barriers — at least in some instances — either of enormous complexity or of infinitesimal size.

One way of looking at self-assembling systems is through a series of criteria. As a gross generalization, a self-assembling system is made up of individual components that can either move themselves or alter their functions, that can connect to each other and that can sense where they are in the system that is assembling itself. The components must do those things without outside intervention.

The guiding example for much of the work in this area is that ultimate self-assembling system, the biological organism. Not by coincidence, much of the existing work in self-assembling systems is not in electronics or robotics but in a new field called synthetic biology.

In essence, synthetic biology has tried to create (or discover) a set of standard biological building blocks for assembling DNA sequences with specific, predictable functions — DNA that will produce specific proteins when inserted into a living cell.

But according to Radhika Nagpal, assistant professor in computer science at Harvard University, the biological work is spilling over into electronics as well. Researchers are working on getting biomolecules to assemble themselves into predictable patterns while carrying along electronic components. Thus, the underlying pattern of molecules would be reflected in the organization of the electronics. Working in another direction, Harvard researcher George Whitesides has been experimenting with two-dimensional electronic circuits that can assemble themselves into three-dimensional circuits.

Much work is also being done on a larger scale, said Nagpal. Self-organizing robotic systems comprising from tens to perhaps a hundred modules have been built. While all of these projects are very much in the research arena, the individuals manning them work with actual hardware — if we can lump DNA into that category — not simply simulation models.

Nor is the work part of some futuristic scenario. "Some of it is nearer than you might think," Nagpal said.

Researchers make rat brain neurons interact with FET array at Max Planck Institute.

The nanotechnology area, though, remains longer-term. Few if any physical examples of self-assembling nanodevices exist today. But many of the principles being developed both in the synthetic-biology arena and in the work on selective-affinity self-assembly for electronic circuits may eventually prove applicable to nanoscale problems.

The final barrier for a breakthrough technology, and the one that is quite possibly the furthest away, is the barrier that separates electronic from living systems. One can envision electronic devices that can directly recognize or act upon living cells or perhaps even individual proteins. Such technology would make possible entirely new applications in medical analysis — identifying a marker protein or a virus in a blood sample, for instance — and in therapy. But the ability to directly interface electronics to cells would also make possible a long-held dream of science-fiction writers: electronic systems that communicate directly with the central nervous systems of humans, bypassing missing limbs or inadequate sense organs.

In this area too, there is science where there used to be science fiction. ICs have been fabricated for some time that are capable of sensing gross properties of chemical solutions, such as pH, the measure of acidity. But more to the point, researchers at the Interuniversity Microelectronics Center (IMEC; Leuven, Belgium) have been working on ICs that can steer individual protein molecules about on the surface of the die, moving them to a detection site where their presence can be recorded. To start the process, researchers first attach a magnetic nanobead to the protein. Then they manipulate a magnetic field to move the molecule. The detection is done by a spin-valve sensor.

Even more exciting work has been reported by IMEC and — at the forthcoming International Solid-State Circuits Conference — will be reported by the Max Planck Institute for Biochemistry (Munich, Germany). Both organizations have reportedly succeeded in fabricating ICs of their respective designs that comprise an array of sensors and transistors. The sensors can detect the electrical "action potentials" generated by neurons and the transistors can stimulate the neurons directly. Living neuron cells have been placed on the surface of the chip, stimulated and sensed. The Max Planck Institute claims to have grown neurons on the surface of a chip as well.

This is a technology of obvious potential, but with a long way to go. For one thing, the physical interface between electronic circuits and biochemical solutions — let alone living cells — is always problematic, according to Luke Lee, assistant professor of bioengineering and director of the Biomolecular Nanotechnology Center at the University of California, Berkeley. After the mechanisms have been understood and the sensors designed, there is still the problem of keeping the chemicals from destroying the chip. So even simple sensors are not a slam dunk.

Moving more delicate creations, such as neuron/chip interfaces, into production is even more problematic. One obvious issue is that the neurons you want to interface to aren't the ones you can extract and put on a chip — they are individuals among millions in a bundle of nerve fibers in a living body. But Lee pointed out that there are repeatability issues even with the in vitro work that is being reported now. It is still at the level of elegant demonstrations, not widely reproducible experiments with consistent results. "I am concerned that many people overpromise nanobiotechnology without really knowing the limitations of nano- and microfabrication," said Lee.


Alcatel & Microsoft develop IP Television

Alcatel and Microsoft Corp. announced Tuesday (Feb. 22) a global collaboration agreement to accelerate the availability of Internet Protocol Television (IPTV) services for broadband operators world-wide.

Under the agreement, the companies will team to develop an integrated IPTV delivery solution leveraging Alcatel's leadership in broadband, IP networking, development, and integration of end-to-end multimedia and video solutions, and Microsoft's expertise in TV software solutions and connected-entertainment experiences across consumer devices.

The companies believe the integrated solution can help service providers reduce deployment costs and shorten time-to-market for IPTV services as they transition to mass-market deployments of IPTV. On-demand video streaming applications, interactive TV, video and voice communications, photo, music and home video sharing, and online gaming are some services that consumers could receive through their multimedia-connected home networks.

Joint initiatives being pursued by both companies include developing custom applications to meet the unique needs of different cultures and markets globally; enhancing application and network resilience for better reliability in large-scale deployment; and integrating content, security and digital rights management to ensure secure delivery of high-quality content throughout the home.

Saturday, January 8, 2005

Lies in Scientific Research by the West

Inventions by Muslims

Roger Bacon is credited with drawing a flying apparatus as is Leonardo da Vinci. Actually Ibn Firnas of Islamic Spain invented, constructed, and tested a flying machine in the 800's A.D. Roger Bacon learned of flying machines from Arabic references to Ibn Firnas' machine. The latter's invention antedates Bacon by 500 years and Da Vinci by some 700 years.


It is taught that glass mirrors were first produced in Venice in the year 1291. Glass mirrors were actually in use in Islamic Spain as early as the 11th century. The Venetians learned the art of fine glass production from Syrian artisans during the 9th and 10th centuries.


It is taught that before the 14th century, the only type of clock available was the water clock. In 1335, a large mechanical clock was erected in Milan, Italy and it was claimed as the first mechanical clock. However, Spanish Muslim engineers produced both large and small mechanical clocks that were weight-driven. This knowledge was transmitted to Europe via Latin translations of Islamic books on mechanics, which contained designs and illustrations of epi-cyclic and segmental gears. One such clock included a mercury escapement. Europeans directly copied the latter type during the 15th century. In addition, during the 9th century, Ibn Firnas of Islamic Spain, according to Will Durant, invented a watch-like device which kept accurate time. The Muslims also constructed a variety of highly accurate astronomical clocks for use in their observatories.


In the 17th century, the pendulum was said to be developed by Galileo during his teenage years. He noticed a chandelier swaying as it was being blown by the wind. As a result, he went home and invented the pendulum. However, the pendulum was actually discovered by Ibn Yunus al-Masri during the 10th century. He was the first to study and document a pendulum's oscillatory motion. Muslim physicists introduced its value for use in clocks in the 15th century.


It is taught that Johannes Gutenberg of Germany invented the movable type and printing press during the 15th century. In 1454, Gutenberg did develop the most sophisticated printing press of the Middle Ages. But a movable brass type was in use in Islamic Spain 100 years prior, which is where the West's first printing devices were made.


It is taught that Isaac Newton's 17th century study of lenses, light, and prisms forms the foundation of the modern science of optics. Actually al-Haytham in the 11th century determined virtually everything that Newton advanced regarding optics centuries prior and is regarded by numerous authorities as the "founder of optics." There is little doubt that Newton was influenced by him. Al-Haytham was the most quoted physicist of the Middle Ages. His works were utilized and quoted by a greater number of European scholars during the 16th and 17th centuries than those of Newton and Galileo combined.


The English scholar Roger Bacon (d. 1292) first mentioned glass lenses for improving vision. At nearly the same time, eyeglasses could be found in use both in China and Europe. Ibn Firnas of Islamic Spain invented eyeglasses during the 9th century, and they were manufactured and sold throughout Spain for over two centuries. Any mention of eyeglasses by Roger Bacon was simply a regurgitation of the work of al-Haytham (d. 1039), whose research Bacon frequently referred to.


Isaac Newton is said to have discovered during the 17th century that white light consists of various rays of colored light. This discovery was made in its entirety by al-Haytham (11th century) and Kamal ad-Din (14th century). Newton did make original discoveries, but this was not one of them.


The concept of the finite nature of matter was first introduced by Antoine Lavoisier during the 18th century. He discovered that, although matter may change its form or shape, its mass always remains the same. Thus, for instance, if water is heated to steam, if salt is dissolved in water, or if a piece of wood is burned to ashes, the total mass remains unchanged. The principles of this discovery were elaborated centuries before by Islamic Persia's great scholar, al-Biruni (d. 1050). Lavoisier was a disciple of the Muslim chemists and physicists and referred to their books frequently.


It is taught that the Greeks were the developers of trigonometry. Trigonometry remained largely a theoretical science among the Greeks. Muslim scholars developed it to a level of modern perfection, although the weight of the credit must be given to al-Battani. The words describing the basic functions of this science, sine, cosine and tangent, are all derived from Arabic terms. Thus, original contributions by the Greeks in trigonometry were minimal.


It is taught that a Dutchman, Simon Stevin, first developed the use of decimal fractions in mathematics in 1589. He helped advance the mathematical sciences by replacing the cumbersome fractions, for instance, 1/2, with decimal fractions, for example, 0.5. Muslim mathematicians were the first to utilize decimals instead of fractions on a large scale. Al-Kashi's book, Key to Arithmetic, was written at the beginning of the 15th century and was the stimulus for the systematic application of decimals to whole numbers and fractions thereof. It is highly probable that Stevin imported the idea to Europe from al-Kashi's work.


The first man to utilize algebraic symbols was the French mathematician, Francois Vieta. In 1591, he wrote an algebra book describing equations with letters such as the now familiar x and y's. Asimov says that this discovery had an impact similar to the progression from Roman numerals to Arabic numbers. Muslim mathematicians, the inventors of algebra, introduced the concept of using letters for unknown variables in equations as early as the 9th century A.D. Through this system, they solved a variety of complex equations, including quadratic and cubic equations. They used symbols to develop and perfect the binomial theorem.


It is taught that the difficult cubic equations (x to the third power) remained unsolved until the 16th century when Niccolo Tartaglia, an Italian mathematician, solved them. Muslim mathematicians solved cubic equations as well as numerous equations of even higher degrees with ease as early as the 10th century.


The concept that numbers could be less than zero, that is negative numbers, was unknown until 1545 when Geronimo Cardano introduced the idea. Muslim mathematicians introduced negative numbers for use in a variety of arithmetic functions at least 400 years prior to Cardano.


In 1614, John Napier invented logarithms and logarithmic tables. Muslim mathematicians invented logarithms and produced logarithmic tables several centuries prior. Such tables were common in the Muslim world as early as the 13th century.


During the 17th century Rene Descartes made the discovery that algebra could be used to solve geometrical problems. By this, he greatly advanced the science of geometry. Mathematicians of the Islamic Empire accomplished precisely this as early as the 9th century A.D. Thabit bin Qurrah was the first to do so, and he was followed by Abu'l Wafa, whose 10th century book utilized algebra to advance geometry into an exact and simplified science.


Isaac Newton, during the 17th century, developed the binomial theorem, which is a crucial component for the study of algebra. Hundreds of Muslim mathematicians utilized and perfected the binomial theorem. They initiated its use for the systematic solution of algebraic problems during the 10th century (or prior).


No improvement had been made in the astronomy of the ancients during the Middle Ages regarding the motion of planets until the 13th century. Then Alphonso the Wise of Castile (Middle Spain) invented the Alphonsine Tables, which were more accurate than Ptolemy's. Muslim astronomers made numerous improvements upon Ptolemy's findings as early as the 9th century. They were the first astronomers to dispute his archaic ideas. In their critique of the Greeks, they synthesized proof that the sun is the center of the solar system and that the orbits of the earth and other planets might be elliptical. They produced hundreds of highly accurate astronomical tables and star charts. Many of their calculations are so precise that they are regarded as contemporary. The Alphonsine Tables are little more than copies of works on astronomy transmitted to Europe via Islamic Spain, i.e. the Toledo Tables.


Gunpowder was developed in the Western world as a result of Roger Bacon's work in 1242. The first usage of gunpowder in weapons was when the Chinese fired it from bamboo shoots in an attempt to frighten Mongol conquerors. They produced it by adding sulfur and charcoal to saltpeter. The Chinese developed saltpeter for use in fireworks and knew of no tactical military use for gunpowder, nor did they invent its formula. Research by Reinaud and Fave has clearly shown that gunpowder was formulated initially by Muslim chemists. Further, these historians claim that the Muslims developed the first firearms. Notably, Muslim armies used grenades and other weapons in their defense of Algeciras against the Franks during the 14th century. Jean Mathes indicates that the Muslim rulers had stockpiles of grenades, rifles, crude cannons, incendiary devices, sulfur bombs and pistols decades before such devices were used in Europe. The first mention of a cannon was in an Arabic text around 1300 A.D. Roger Bacon learned of the formula for gunpowder from Latin translations of Arabic books. He brought forth nothing original in this regard.


It is taught that the Chinese invented the compass and may have been the first to use it for navigational purposes sometime between 1000 and 1100 A.D. The earliest reference to its use in navigation was by the Englishman, Alexander Neckam (1157-1217). Muslim geographers and navigators learned of the magnetic needle, possibly from the Chinese, and were the first to use magnetic needles in navigation. They invented the compass and passed the knowledge of its use in navigation to the West. European navigators relied on Muslim pilots and their instruments when exploring unknown territories. Gustav Le Bon claims that the magnetic needle and compass were entirely invented by the Muslims and that the Chinese had little to do with it. Neckam, as well as the Chinese, probably learned of it from Muslim traders. It is noteworthy that the Chinese improved their navigational expertise after they began interacting with the Muslims during the 8th century.


The first man to classify the races was the German Johann F. Blumenbach, who divided mankind into white, yellow, brown, black and red peoples. Muslim scholars of the 9th through 14th centuries invented the science of ethnography. A number of Muslim geographers classified the races, writing detailed explanations of their unique cultural habits and physical appearances. They wrote thousands of pages on this subject. Blumenbach's works were insignificant in comparison.


The science of geography was revived during the 15th, 16th and 17th centuries when the ancient works of Ptolemy were discovered. The Crusades and the Portuguese/Spanish expeditions also contributed to this reawakening. The first scientifically based treatises on geography were produced during this period by Europe's scholars. Muslim geographers produced untold volumes of books on the geography of Africa, Asia, India, China and the Indies during the 8th through 15th centuries. These writings included the world's first geographical encyclopedias, almanacs and road maps. Ibn Battutah's 14th century masterpieces provide a detailed view of the geography of the ancient world. The Muslim geographers of the 10th through 15th centuries far exceeded the output by Europeans regarding the geography of these regions well into the 18th century. The Crusades led to the destruction of educational institutions, their scholars and books. They brought nothing substantive regarding geography to the Western world.


It is taught that Robert Boyle in the 17th century originated the science of chemistry. A variety of Muslim chemists, including ar-Razi, al-Jabr, al-Biruni and al-Kindi, performed scientific experiments in chemistry some 700 years prior to Boyle. Durant writes that the Muslims introduced the experimental method to this science. Humboldt regards the Muslims as the founders of chemistry.


It is taught that Leonardo da Vinci (16th century) fathered the science of geology when he noted that fossils found on mountains indicated a watery origin of the earth. Al-Biruni (11th century) made precisely this observation and added much to it, including a huge book on geology, hundreds of years before Da Vinci was born. Ibn Sina noted this as well (see pages 100-101). It is probable that Da Vinci first learned of this concept from Latin translations of Islamic books. He added nothing original to their findings.


The first mention of the geological formation of valleys was in 1756, when Nicolas Desmarest proposed that they were formed over long periods of time by streams. Ibn Sina and al-Biruni made precisely this discovery during the 11th century (see pages 102 and 103), fully 700 years prior to Desmarest.


Galileo (17th century) was the world's first great experimenter. Al-Biruni (d. 1050) was the world's first great experimenter. He wrote over 200 books, many of which discuss his precise experiments. His literary output in the sciences amounts to some 13,000 pages, far exceeding that written by Galileo or, for that matter, Galileo and Newton combined.


The Italian Giovanni Morgagni is regarded as the father of pathology because he was the first to correctly describe the nature of disease. Islam's surgeons were the first pathologists. They fully realized the nature of disease and described a variety of diseases in modern detail. Ibn Zuhr correctly described the nature of pleurisy, tuberculosis and pericarditis. Az-Zahrawi accurately documented the pathology of hydrocephalus (water on the brain) and other congenital diseases. Ibn al-Quff and Ibn an-Nafs gave perfect descriptions of the diseases of circulation. Other Muslim surgeons gave the first accurate descriptions of certain malignancies, including cancer of the stomach, bowel, and esophagus. These surgeons were the originators of pathology, not Giovanni Morgagni.


It is taught that Paul Ehrlich (19th century) is the originator of drug chemotherapy, that is the use of specific drugs to kill microbes. Muslim physicians used a variety of specific substances to destroy microbes. They applied sulfur topically specifically to kill the scabies mite. Ar-Razi (10th century) used mercurial compounds as topical antiseptics.


Purified alcohol, made through distillation, was first produced by Arnau de Villanova, a Spanish alchemist in 1300 A.D. Numerous Muslim chemists produced medicinal-grade alcohol through distillation as early as the 10th century and manufactured on a large scale the first distillation devices for use in chemistry. They used alcohol as a solvent and antiseptic.


C.W. Long, an American, conducted the first surgery performed under inhalation anesthesia in 1845. Six hundred years prior to Long, Islamic Spain's Az-Zahrawi and Ibn Zuhr, among other Muslim surgeons, performed hundreds of surgeries under inhalation anesthesia with the use of narcotic-soaked sponges which were placed over the face.


During the 16th century Paracelsus invented the use of opium extracts for anesthesia. Muslim physicians introduced the anesthetic value of opium derivatives during the Middle Ages. The Greeks originally used opium as an anesthetic agent. Paracelsus was a student of Ibn Sina's works, from which it is almost assured that he derived this idea.


Humphry Davy and Horace Wells invented modern anesthesia in the 19th century. Modern anesthesia was discovered, mastered, and perfected by Muslim anesthetists 900 years before the advent of Davy and Wells. They utilized oral as well as inhalant anesthetics.


The concept of quarantine was first developed in 1403. In Venice, a law was passed preventing strangers from entering the city until a certain waiting period had passed. If by then no sign of illness could be found, they were allowed in. The concept of quarantine was first introduced in the 7th century A.D. by the prophet Muhammad (P.B.U.H.), who wisely warned against entering or leaving a region suffering from plague. As early as the 10th century, Muslim physicians pioneered the use of isolation wards for individuals suffering with communicable diseases.


The scientific use of antiseptics in surgery was discovered by the British surgeon Joseph Lister in 1865. As early as the 10th century, Muslim physicians and surgeons were applying purified alcohol to wounds as an antiseptic agent. Surgeons in Islamic Spain utilized special methods for maintaining antisepsis prior to and during surgery. They also originated specific protocols for maintaining hygiene during the post-operative period. Their success rate was so high that dignitaries throughout Europe came to Cordova, Spain, to be treated at what was comparably the "Mayo Clinic" of the Middle Ages.


It is taught that in 1545, the scientific use of surgery was advanced by the French surgeon Ambroise Pare. Prior to him, surgeons attempted to stop bleeding through the gruesome procedure of searing the wound with boiling oil. Pare stopped the use of boiling oils and began ligating arteries. He is considered the "father of rational surgery." Pare was also one of the first Europeans to condemn such grotesque "surgical" procedures as trepanning (see reference #6, pg. 110). Islamic Spain's illustrious surgeon, az-Zahrawi (d. 1013), began ligating arteries with fine sutures over 500 years prior to Pare. He perfected the use of catgut, that is, suture made from animal intestines. Additionally, he instituted the use of cotton plus wax to plug bleeding wounds. The full details of his works were made available to Europeans through Latin translations. Despite this, barbers and herdsmen continued to be the primary individuals practicing the "art" of surgery for nearly six centuries after az-Zahrawi's death. Pare himself was a barber, albeit more skilled and conscientious than the average ones. Included in az-Zahrawi's legacy are dozens of books. His most famous work is a 30-volume treatise on medicine and surgery. His books contain sections on preventive medicine, nutrition, cosmetics, drug therapy, surgical technique, anesthesia, pre- and post-operative care as well as drawings of some 200 surgical devices, many of which he invented. The refined and scholarly az-Zahrawi must be regarded as the father and founder of rational surgery, not the uneducated Pare.


William Harvey, during the early 17th century, discovered that blood circulates. He was the first to correctly describe the function of the heart, arteries, and veins. Rome's Galen had presented erroneous ideas regarding the circulatory system, and Harvey was the first to determine that blood is pumped throughout the body via the action of the heart and the venous valves. Therefore, he is regarded as the founder of human physiology. In the 10th century, Islam's ar-Razi wrote an in-depth treatise on the venous system, accurately describing the function of the veins and their valves. Ibn an-Nafs and Ibn al-Quff (13th century) provided full documentation that the blood circulates and correctly described the physiology of the heart and the function of its valves 300 years before Harvey. William Harvey was a graduate of Italy's famous Padua University at a time when the majority of its curriculum was based upon Ibn Sina's and ar-Razi's textbooks.


The first pharmacopoeia (book of medicines) was published by a German scholar in 1542. According to World Book Encyclopedia, the science of pharmacology was begun in the 1900's as an offshoot of chemistry due to the analysis of crude plant materials. Chemists, after isolating the active ingredients from plants, realized their medicinal value. According to the eminent scholar of Arab history, Phillip Hitti, the Muslims, not the Greeks or Europeans, wrote the first "modern" pharmacopoeia. The science of pharmacology was originated by Muslim physicians during the 9th century. They developed it into a highly refined and exact science. Muslim chemists, pharmacists, and physicians produced thousands of drugs and/or crude herbal extracts one thousand years prior to the supposed birth of pharmacology. During the 14th century Ibn Baytar wrote a monumental pharmacopoeia listing some 1400 different drugs. Hundreds of other pharmacopoeias were published during the Islamic Era. It is likely that the German work is an offshoot of that by Ibn Baytar, which was widely circulated in Europe.


It is taught that the discovery of the scientific use of drugs in the treatment of specific diseases was made by Paracelsus, the Swiss-born physician, during the 16th century. He is also credited with being the first to use practical experience as a determining factor in the treatment of patients rather than relying exclusively on the works of the ancients. Ar-Razi, Ibn Sina, al-Kindi, Ibn Rushd, az-Zahrawi, Ibn Zuhr, Ibn Baytar, Ibn al-Jazzar, Ibn Juljul, Ibn al-Quff, Ibn an-Nafs, al-Biruni, Ibn Sahl and hundreds of other Muslim physicians mastered the science of drug therapy for the treatment of specific symptoms and diseases. In fact, this concept was entirely their invention. The word "drug" is derived from Arabic. Their use of practical experience and careful observation was extensive. Muslim physicians were the first to criticize ancient medical theories and practices. Ar-Razi devoted an entire book as a critique of Galen's anatomy. The works of Paracelsus are insignificant compared to the vast volumes of medical writings and original findings accomplished by the medical giants of Islam.


The first sound approach to the treatment of disease was made by a German, Johann Weger, in the 1500's. Harvard's George Sarton says that modern medicine is entirely an Islamic development while emphasizing that Muslim physicians of the 9th through 12th centuries were precise, scientific, rational, and sound in their approach. Johann Weger was among thousands of European physicians during the 15th through 17th centuries who were taught the medicine of ar-Razi and Ibn Sina. He contributed nothing original.


Medical treatment for the insane was modernized by Philippe Pinel when in 1793 he operated France's first insane asylum. As early as the 11th century, Islamic hospitals maintained special wards for the insane. They treated them kindly and presumed their disease was real at a time when the insane were routinely burned alive in Europe as witches and sorcerers. A curative approach was taken for mental illness and for the first time in history, the mentally ill were treated with supportive care, drugs, and psychotherapy. Every major Islamic city maintained an insane asylum where patients were treated at no charge. In fact, the Islamic system for the treatment of the insane excels in comparison to the current model, as it was more humane and was highly effective as well.


It is taught that kerosene was first produced by an Englishman, Abraham Gesner in 1853. He distilled it from asphalt. Muslim chemists produced kerosene as a distillate from petroleum products over 1,000 years prior to Gesner (see Encyclopaedia Britannica under the heading, Petroleum).