Friday, July 20, 2007

Random thoughts on Brain

These are some incomplete thoughts on the human brain; I will complete this article with more details in the near future ...

The brain's memory structure is one of the best structures in the world. Its semi-fluid state is the best form in which to store any information. There is a need to design computer hard disks the same way as our brain. Brain memory can be divided into different logical parts, namely:
Living instincts Memory
Professional or Working memory
Relational Memory
Basic Logical Memory and
Advanced Knowledge Memory.

When a person is affected by amnesia, he could be losing one of the above-mentioned memories and can still survive with the other memories intact.

The working of brain memory can be classified into different categories as below:
1) Replace
2) Upgrade
3) Forget
4) Merge

Any day-to-day activity can be classified into one of the above categories.

Brain memory works like a parallel comparing machine. There is a separate thread for every smell, video, audio and sensation we have ever experienced.
Whenever a new thing is seen, it is compared in parallel with the existing video, audio, smell and sensation parts of the brain, and if it is not already present, it is added to the existing list.
If the data already exists, it is upgraded and the final output is sent to our thought network.

Say, if one of the video, audio, smell or sensation threads matches the newly seen thing, we may remember some forgotten old experience. Don't confuse this with deja-vu. It's something like smelling food in a restaurant and remembering an incident that happened in your college canteen a long time back.
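Just for fun, here is how that compare / add / upgrade idea might look as a toy Python sketch (entirely my own illustration, not real neuroscience):

# Toy model of the "parallel comparing machine" described above.
# Each sense keeps its own store of experiences; a new input is
# checked against the store, added if unseen, "upgraded" if known.
memory = {"smell": {}, "video": {}, "audio": {}, "sensation": {}}

def experience(sense, thing):
    store = memory[sense]
    if thing in store:
        store[thing] += 1   # Upgrade: strengthen the existing memory
        return "remembered " + thing
    store[thing] = 1        # Add the new thing to the existing list
    return "learned " + thing

print(experience("smell", "canteen food"))  # learned canteen food
print(experience("smell", "canteen food"))  # remembered canteen food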

Monday, June 25, 2007

Pirate Bay introduces Image Hosting site

As most of you already know, Pirate Bay has launched an "uncensored" image hosting site, Bayimg.com!!

At the bottom of every page you see: “NO COPYRIGHT. NO LICENSE.” “When you uploaded the picture, you wrote a removal code,” a Bayimg FAQ section states. Users (who are trying to pull their pics) are then reminded that a removal code is “Kind of like a password. Now type it in and noone will ever see that picture again!”

It allows people to host rar and zip archive files too, up to 100MB. Hurray!! Now you can hide your personal files in image format and store them freely on the web!!


I disapprove of what you say, but I will defend to the death your right to say it. ~Voltaire

Thursday, June 21, 2007

Hide doc, txt etc. files in jpeg photos

Here I teach you how to hide your confidential documents, text, Excel and other small files in jpeg photo format.
Other than you, whoever clicks on the jpeg file sees only a picture. Only you know how to open this secret and confidential file!!
This trick works for both Winzip (which I am using here) and Winrar, and many "other tools" which I won't disclose here ;)



Step 1: Let's say you have a photo called “a.jpg” which you use as bait to hide your secret file “confidential.doc”.
(You can use txt, xls or any other small file instead of a doc file)

Step 2: Now zip the “a.jpg” and “confidential.doc” files into a “confidential.zip” file.
I.e., select a.jpg and confidential.doc, right click → Add to Zip file.


Step 3: Now combine the jpeg and the zip file into another jpeg file using a simple DOS command as shown below:
Copy /b a.jpg + confidential.zip final.jpg


Step 4: Now if you open final.jpg using Winzip, you see both the files. If anybody else clicks on final.jpg, they just see the original photo in an image viewer.
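By the way, if you don't want to use the DOS command, the same concatenation can be done with a few lines of Python (a minimal sketch of mine, using the same file names as above):

# Equivalent of the DOS command: copy /b a.jpg + confidential.zip final.jpg
# Image viewers read the JPEG data at the front of the file, while zip
# tools look for the zip archive at the end, so each sees what it expects.
picture = open("a.jpg", "rb").read()
archive = open("confidential.zip", "rb").read()
open("final.jpg", "wb").write(picture + archive)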
Please comment if you have any doubts or queries. Also let me know if you have more interesting tricks!!

Wednesday, June 13, 2007

Assign Drive letter to a folder

Do you use a particular folder very often on your PC?? Do you feel sick of going through the whole navigation hierarchy to access the contents of that folder?? OK, one way to circumvent it is to create a shortcut to that folder on your desktop: just right click the folder, go to Send To and select "Desktop (create shortcut)".
There is another way to do the same thing which I think is more geeky!! Here we use an MS-DOS command called "subst" or "SUBSTITUTE" which creates a drive letter in your My Computer linking to the desired folder.
For example, say you need to create a drive letter for the folder "D:\jdk1.5.0_07\demo\jfc\Java2D".
Open a DOS prompt using "command" at the Run menu.

Type subst f: "D:\jdk1.5.0_07\demo\jfc\Java2D" and press Enter.
Voila, a drive letter F: is created in your My Computer.
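When you no longer need the mapping, type subst f: /d and press Enter to remove it. Also note that subst mappings do not survive a reboot, so you will need to run the command again (or put it in a startup script) after restarting.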

Monday, May 14, 2007

Happy Birthday George !!!


Saturday, March 24, 2007

Interesting Rule of 72

I know most of you know this already (sorry to bore you guys!!). Just writing it down for the blog's sake.

Anyway, there's something called the Rule of 72, which helps you a lot if you worry about your saved money.
The rule says: to find out how quickly any money you invest will double, simply divide 72 by the rate of interest at which it is invested. This assumes you don't touch the money and it grows with compound interest.

Examples:
If your money is invested at 8% it will double in 9 years (72 divided by 8).
If your money is invested at 10% it will double in 7.2 years (72 divided by 10).
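If you're curious how close the rule really gets, the exact doubling time is ln(2)/ln(1 + r). Here is a quick Python check (my own snippet):

import math

# Compare the Rule of 72 estimate against the exact doubling time
for rate in (4, 8, 10, 12):
    exact = math.log(2) / math.log(1 + rate / 100.0)
    print("%d%%: rule of 72 = %.1f years, exact = %.2f years"
          % (rate, 72.0 / rate, exact))

At 8% the rule says 9.0 years and the exact answer is about 9.01 years, so it is a very good approximation at everyday interest rates.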

Wednesday, February 28, 2007

Small story of Numbers !! (Interesting)

Read this in some forum and it's interesting enough to put up on my blog :)

The numbers we all use (1, 2, 3, 4, etc.) are known as "Arabic" numerals to distinguish them from the "Roman numerals" (I, II, III, IV, V, VI, etc.). Actually the Arabs popularized these numbers, but they were originally used by the early Phoenician traders to count and keep track of their trading accounts. Have you ever thought why '1' means "one" and '2' means "two"? The Roman numerals are easy to understand, but what was the logic behind the symbols of the Phoenician numbers? It's all about the angles between the lines you draw!! It's the number of angles. If one writes the numbers down (see below) on a piece of paper in their older forms, one quickly sees why. I have marked the angles with "o"s.
No 1 has one angle.
No 2 has two angles.
No 3 has three angles.
etc...
and "O" has no angles.

Tuesday, January 30, 2007

Learning Python made very easy

Going through many news forums and language communities, I was wondering what's so great about Python compared to Java. The article below answered some of my questions.
Python v/s Java side by side

With the advent of multi-core processors, the need for a better language has risen, and the bar for a language to be the language of the future has also risen a lot. So I am wondering whether Java or C++ can survive this change.

Anyway, whether that's good or not, I have started learning Python and it seems very easy, readable and amazing. There are many resources to learn Python, and one of my favourites is the online book <Byte of Python>, which is very good for beginners. The chapters are well crafted, and all the basic requirements and installations are covered step by step, making the journey very easy and relaxed.

I have installed ActivePython and getting started was very easy. I have run some example code and written some basic code to familiarize myself with the language, and as said above, the learning curve for Python is gentler than that of any other language I have learned so far (except Cobol ;) though).
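For a taste of the readability, here is the kind of tiny snippet I tried first (a made-up example of mine, not from the book): counting word frequencies, which would take noticeably more ceremony in Java.

# Count how often each word appears in a sentence
sentence = "the quick brown fox jumps over the lazy dog the end"
counts = {}
for word in sentence.split():
    counts[word] = counts.get(word, 0) + 1
print(counts)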
That's it for now. Maybe I will post the hurdles, successes (and whatever else) of my journey in two months.

Multi core processors and software to make use of it

OK, it's been a long time since I touched a pen... oops, sorry, a keyboard to write something here.
Anyway, I came across this nice article on multi-core processors and how the software industry should change for this new mantra to take full effect.

Some points from the article are listed below:
What is multi-core computing? I know it's a plain stupid question, but just for the sake of "others".

Over the past 30 years, CPU designers have achieved performance gains in three main areas, the first two of which focus on straight-line execution flow:

clock speed
execution optimization
cache

Increasing clock speed is about getting more cycles. Running the CPU faster more or less directly means doing the same work faster.

Optimizing execution flow is about doing more work per cycle. Today’s CPUs sport some more powerful instructions, and they perform optimizations that range from the pedestrian to the exotic, including pipelining, branch prediction, executing multiple instructions in the same clock cycle(s), and even reordering the instruction stream for out-of-order execution. These techniques are all designed to make the instructions flow better and/or execute faster, and to squeeze the most work out of each clock cycle by reducing latency and maximizing the work accomplished per clock cycle.

Finally, increasing the size of on-chip cache is about staying away from RAM. Main memory continues to be so much slower than the CPU that it makes sense to put the data closer to the processor—and you can’t get much closer than being right on the die. On-die cache sizes have soared, and today most major chip vendors will sell you CPUs that have 2MB and more of on-board L2 cache. (Of these three major historical approaches to boosting CPU performance, increasing cache is the only one that will continue in the near term.)

Multicore is about running two or more actual CPUs on one chip. Some chips, including Sparc and PowerPC, have multicore versions available already. The initial Intel and AMD designs, both due in 2005, vary in their level of integration but are functionally similar. AMD’s seems to have some initial performance design advantages, such as better integration of support functions on the same die, whereas Intel’s initial entry basically just glues together two Xeons on a single die. The performance gains should initially be about the same as having a true dual-CPU system (only the system will be cheaper because the motherboard doesn’t have to have two sockets and associated “glue” chippery), which means something less than double the speed even in the ideal case, and just like today it will boost reasonably well-written multi-threaded applications. Not single-threaded ones.

What This Means For Software

Concurrency is the next major revolution in how we write software.
We’ve been doing concurrent programming since the dark ages of computing, writing coroutines and monitors and similar jazzy stuff. And for the past decade or so we’ve witnessed incrementally more and more programmers writing concurrent (multi-threaded, multi-process) systems. But an actual revolution marked by a major turning point toward concurrency has been slow to materialize. Today the vast majority of applications are single-threaded.

Benefits and Costs of Concurrency

There are two major reasons for which concurrency, especially multithreading, is already used in mainstream software. The first is to logically separate naturally independent control flows; for example, in a database replication server I designed it was natural to put each replication session on its own thread, because each session worked completely independently of any others that might be active (as long as they weren’t working on the same database row). The second and less common reason to write concurrent code in the past has been for performance, either to scalably take advantage of multiple physical CPUs or to easily take advantage of latency in other parts of the application; in my database replication server, this factor applied as well and the separate threads were able to scale well on multiple CPUs as our server handled more and more concurrent replication sessions with many other servers.
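The "one thread per independent session" pattern the author describes looks roughly like this in Python (a hypothetical sketch of mine, not the actual replication server):

import threading

def handle_session(session_id):
    # Each session works completely independently of the others,
    # so it can safely run on its own thread with no coordination.
    print("replicating session %d" % session_id)

threads = [threading.Thread(target=handle_session, args=(i,))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()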

There are, however, real costs to concurrency. Some of the obvious costs are actually relatively unimportant. For example, yes, locks can be expensive to acquire, but when used judiciously and properly you gain much more from the concurrent execution than you lose on the synchronization, if you can find a sensible way to parallelize the operation and minimize or eliminate shared state.
Probably the greatest cost of concurrency is that concurrency really is hard: The programming model, meaning the model in the programmer’s head that he needs to reason reliably about his program, is much harder than it is for sequential control flow.
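To make the "minimize or eliminate shared state" point above concrete, here is a small Python sketch of mine where each worker gets its own chunk of the data and shares nothing, so no locks are needed at all:

from multiprocessing import Pool

def crunch(chunk):
    # Pure function: operates only on its own chunk, shares no state
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1000000))
    chunks = [data[i::4] for i in range(4)]  # split the work four ways
    pool = Pool(4)
    # Workers run in parallel; the partial sums are combined at the end
    print(sum(pool.map(crunch, chunks)))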

Finally, programming languages and systems will increasingly be forced to deal well with concurrency. The Java language has included support for concurrency since its beginning, although mistakes were made that later had to be corrected over several releases in order to do concurrent programming more correctly and efficiently. The C++ language has long been used to write heavy-duty multithreaded systems well, but it has no standardized support for concurrency at all (the ISO C++ standard doesn’t even mention threads, and does so intentionally), and so typically the concurrency is of necessity accomplished by using nonportable platform-specific concurrency features and libraries. (It’s also often incomplete; for example, static variables must be initialized only once, which typically requires that the compiler wrap them with a lock, but many C++ implementations do not generate the lock.) Finally, there are a few concurrency standards, including pthreads and OpenMP, and some of these support implicit as well as explicit parallelization. Having the compiler look at your single-threaded program and automatically figure out how to parallelize it implicitly is fine and dandy, but those automatic transformation tools are limited and don’t yield nearly the gains of explicit concurrency control that you code yourself. The mainstream state of the art revolves around lock-based programming, which is subtle and hazardous. We desperately need a higher-level programming model for concurrency than languages offer today; I'll have more to say about that soon.
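And as a tiny illustration of why lock-based programming is subtle, here is the classic lost-update race in Python (my own example, not from the article): the increment looks atomic but isn't, and the lock is what keeps the count correct.

import threading

counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        with lock:        # remove this lock and updates can be lost,
            counter += 1  # because "counter += 1" is not one atomic step

threads = [threading.Thread(target=work, args=(100000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; often less without it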

Conclusion

If you haven’t done so already, now is the time to take a hard look at the design of your application, determine what operations are CPU-sensitive now or are likely to become so soon, and identify how those places could benefit from concurrency. Now is also the time for you and your team to grok concurrent programming’s requirements, pitfalls, styles, and idioms.

A few rare classes of applications are naturally parallelizable, but most aren’t. Even when you know exactly where you’re CPU-bound, you may well find it difficult to figure out how to parallelize those operations; all the more reason to start thinking about it now. Implicitly parallelizing compilers can help a little, but don’t expect much; they can’t do nearly as good a job of parallelizing your sequential program as you could do by turning it into an explicitly parallel and threaded version.

Thanks to continued cache growth and probably a few more incremental straight-line control flow optimizations, the free lunch will continue a little while longer; but starting today the buffet will only be serving that one entrée and that one dessert. The filet mignon of throughput gains is still on the menu, but now it costs extra—extra development effort, extra code complexity, and extra testing effort. The good news is that for many classes of applications the extra effort will be worthwhile, because concurrency will let them fully exploit the continuing exponential gains in processor throughput.