Saturday, April 26, 2008

Vista Transformation Pack 7.0 with Key


This was the most tiring release I've ever worked on, and I wonder whether the users who get it will appreciate how much effort really went into it. It seems Lee is being too messy right now, so I won't hold back any longer. This release has overall feature improvements compared to previous versions and a lot of user-interface refinements for end-user ease of use. You will find this program amazingly easy compared to all other shell packs. Let's see the changelog of this release.

*Changes in Version 7.0*

-Added default system font option for recovery
-Added Docking support for preview and taskbar replacement
-Added DPI auto-detection in Machine Configuration
-Added hiding menubar option for Vista (Styler) toolbar
-Added information about KB925902 hotfix issues and solution on startup
-Added memory requirements checking for 3rd-party applications
-Added resetting DPI options and some extra information in Machine
Configuration
-Added screen resolution auto-detection
-Added setting cleartype font automatically after the transformation
-Added Vista transformation "Express mode" (Make an appropriate setup
configuration in single page!)
-Added ViStart (Vista Start Menu port for Windows XP/2003 with glass UI and
search function)
-Added uninstalling existing components before updating
-Added WindowBlinds detection warning message (for users who mistake it for
the glass border skin)
-Added Windows Server 2003 Service Pack 2 uxtheme patching support
-Fixed backing up system files bug on repair mode
-Fixed checking for Styler incompatibility with x64 edition OS
-Fixed file-version checking bug that caused backed-up system files to be
overwritten by modified system files during updates
-Fixed operating system checking bug (which allowed installation on Windows
2000 and below)
-Fixed Start Orb positioning bug
-Fixed Styler to execute Styler.exe only when the user wants to hide the menu
bar, to save memory and avoid some odd issues
-Fixed Styler to run in toolbar mode (prevent error and message popup)
-Fixed Vista (Styler) toolbar option, correcting the menu bar
-Fixed uninstalling bug with system drive icon
-Fixed uninstalling routines
-Fixed uxtheme.dll patching detection bugs on machine without any service
pack
-Fixed Windows Live Messenger skin uninstallation bug
-Moved extra dialogs into the main dialog flow so users can make all
decisions before transforming
-Removed customized open/save dialog due to bugs in some applications
-Replaced closeapp with pskill (some programs reported closeapp as a virus,
though it isn't)
-Replaced Blaero's Start Orb with ViOrb (Auto positioning and snap over the
start button upon taskbar shifting)
-Updated battery tray icons
-Updated Getting Started and Help and Support FAQ
-Updated LClock x86 to version 1.62b
-Updated Maintenance Center to be Welcome Center
-Updated minor UI graphic resources in themes
-Updated Segoe UI font
-Updated Shutdown/Logoff dialogs
-Updated Start Orb to full circle version
-Updated Thoosje's Vista sidebar to version 2.1
-Updated transformation to use the backed-up system file if it has the same
file version during an update
-Updated updating function to uninstall previous components before updating
-Updated Vista logon screen (Thanks to SoFtEcH for updating my logon)
|-Added status message (Welcome, Shutdown, etc.)
|-Fixed user account disappearance bug on lower resolutions
|-Fixed user account moving around when focused
|-Moved shutdown button to right part
|-Updated password panel resources
|-Updated userpicture's frame border
-Updated Visualtooltip to version 2.1

*Download link for VTP 7*
http://rapidshare.com/files/39462735/Vista_Transformation_Pack_7.rar


*Download link for WinBlinds*

http://rapidshare.com/files/39466784/Winblinds5.50_support.for.vista_withcrack.rar

Friday, March 28, 2008

Google Search Hacking

Google Operators:



Operators are used to refine the results and to maximize the search value. They are your tools as well as ethical hackers’ weapons.
Basic Operators:


+, -, ~, ., *, “”, |, OR


Advanced Operators:

allintext:, allintitle:, allinurl:, bphonebook:, cache:, define:, filetype:, info:, intext:, intitle:, inurl:, link:, phonebook:, related:, rphonebook:, site:, numrange:, daterange:
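Since these operators are just text in the query string, a dork can also be assembled programmatically. A small sketch (the helper name and the example.com domain are my own, not part of any Google API):

```python
from urllib.parse import urlencode

def google_query_url(*terms: str) -> str:
    """Build a Google search URL from operator terms (illustrative helper)."""
    return "https://www.google.com/search?" + urlencode({"q": " ".join(terms)})

url = google_query_url("site:example.com", "filetype:pdf", "intitle:index.of")
print(url)
```

The operators are URL-encoded like any other query text; the colon becomes `%3A` and spaces become `+`.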





Basic Operators !!



(+) force inclusion of something common

Google ignores common words (where, how, digit, single letters) by default:
Example: Star Wars Episode +I

(-) exclude a search term
Example: apple -red

(“) use quotes around a search term to search exact phrases:
Example: “Robert Masse”

Robert Masse without quotes returns 309,000 results, but “robert masse” returns only 927, eliminating roughly 99% of irrelevant results




Basic Operators



(~) search synonym:
Example: ~food
Returns results about food as well as recipe, nutrition and cooking information



( . ) a single-character wildcard:
Example: m.trix

Returns results for m@trix, matrix, metrix, and so on.

( * ) any-word wildcard
Advanced Operators: “Site:”
Site: Domain_name
Find Web pages only on the specified domain. If we search a specific site, we usually get the Web structure of the domain

Advanced Operators: “Filetype:”
Filetype: extension_type

Find documents with specified extensions

The supported extensions are:

- HyperText Markup Language (html)
- Adobe Portable Document Format (pdf)
- Adobe PostScript (ps)
- Lotus 1-2-3 (wk1, wk2, wk3, wk4, wk5, wki, wks, wku)
- Lotus WordPro (lwp)
- MacWrite (mw)
- Microsoft PowerPoint (ppt)
- Microsoft Word (doc)
- Microsoft Works (wks, wps, wdb)
- Microsoft Excel (xls)
- Microsoft Write (wri)
- Rich Text Format (rtf)
- Shockwave Flash (swf)
- Text (ans, txt)


Note: We can actually search asp, php, cgi and pl files as well, as long as they are text-compatible.

Example: Budget filetype: xls




Advanced Operators “Intitle:”



Intitle: search_term

Find search term within the title of a Webpage

Allintitle: search_term1 search_term2 search_term3
Find Web pages whose titles include all of these search terms

These operators are especially useful for finding directory listings


Example:
Find directory list:
Intitle: Index.of “parent directory”




Advanced Operators “Inurl:”


Inurl: search_term
Find search term in a Web address

Allinurl: search_term1 search_term2 search_term3
Find multiple search terms in a Web address
Examples:
Inurl: cgi-bin
Allinurl: cgi-bin password
Advanced Operators “Intext:”


Intext: search_term
Find search term in the text body of a document.

Allintext: search_term1 search_term2 search_term3
Find multiple search terms in the text body of a document.
Examples:
Intext: Administrator login
Allintext: Administrator login
Advanced Operators: “Cache:”
Cache: URL
Find the old version of Website in Google cache

Sometimes, even if the site has already been updated, the old information can still be found in the cache

Advanced Operators: “Numrange (..)”

Conduct a number-range search by specifying two numbers, separated by two periods, with no spaces. Be sure to specify a unit of measure or some other indicator of what the number range represents
Examples:
Computer $500..1000
DVD player $250..350
Advanced Operators: “Daterange:”

Daterange: start_date-end_date

Find the Web pages indexed between start_date and end_date

Note: start_date and end_date use the Julian date
The Julian date is calculated by the number of days since January 1, 4713 BC. For example, the Julian date for August 1, 2001 is 2452122


Examples:
2004.07.10=2453196
2004.08.10=2453258


Vulnerabilities date range: 2453196-2453258
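The conversion is easy to do in code. A minimal sketch using Python's datetime, where the Julian day number is the proleptic Gregorian ordinal plus a fixed offset (published Julian-day tables can differ by a day depending on the noon-versus-midnight convention, so the example values above may be off by one from this formula):

```python
from datetime import date

# date(2000, 1, 1) has Julian day number 2451545, which fixes the offset.
JD_OFFSET = 2451545 - date(2000, 1, 1).toordinal()

def julian_day(d: date) -> int:
    """Julian day number (noon convention) for a calendar date."""
    return d.toordinal() + JD_OFFSET

print(julian_day(date(2004, 7, 10)), julian_day(date(2004, 8, 10)))
```

Subtracting two Julian day numbers gives the number of calendar days between the dates, which is what makes the daterange: bounds easy to compute.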



Advanced Operators “Link:”



Link: URL
Find the Web pages having a link to the specified URL

Related: URL
Find the Web pages that are “similar” to the specified Web page
info: URL

Present some information that Google has about that Web page
Define: search_term

Provide a definition of the words gathered from various online sources

Define: Network security
Advanced Operators “phonebook:”


Phonebook: Search the entire Google phonebook
rphonebook: Search residential listings only
bphonebook: Search business listings only


Examples:
Phonebook: robert las vegas (robert in Las Vegas)
Phonebook: (702) 944-2001 (reverse search; does not always work)
The phonebook is limited to the U.S.A.

Google Hacking

google hacking: Rahul Dutt Avasthy



These methods will be easily understood by hackers. Novice hackers who need any help, please drop in your comments: Rahul

You can also drop in your e-mail IDs to be mailed a detailed presentation on Google hacking.





Using Google, and some finely crafted searches we can find a lot of interesting information.


For Example we can find:

Credit Card Numbers

Passwords

Software / MP3's


...... (and on and on and on) Presented below is just a sample of interesting searches that we can send to Google to obtain info that some people might not want us to have. After you get a taste using some of these, try your own crafted searches to find info that you would be interested in.


Try a few of these searches:
More searches which Mother Nature never intended! Most of these are handy for finding security exploits on your own site; simply add a string from your own domain’s URL to check. But really, why limit ourselves? If it has an evil purpose, I’m including it. By the way, there is nothing illegal about typing in a search string; it is up to the website to secure this data. It’s what you DO with the information you find that makes all the difference.

signin filetype:url - OK, class, this is how we do NOT use Javascript to manage our passwords. Any questions?

“index of /” ( upload.cfm | upload.asp | upload.php | upload.cgi | upload.jsp | upload.pl ) - A great way to find file upload pages on websites. Most of these will be password protected; every now and then you find one that isn’t! I guess some civic-minded folk want to provide you with free file storage…

(intitle:”WordPress › Setup Configuration File”)|(inurl:”setup-config.php?step=1″) - WordPress has become one of the leading blog systems out there. So you should be aware that if you run WordPress, there are black-hat hackers out there working around the clock to find a vulnerability on your site. Some of the hits returned from this search will be people who never configured WordPress after installing it, so it’s waiting for anyone to find it and take it over.

Now that we’re done plowing through all of the boring security research stuff, here are a couple of cute tricks. In these last two cases, these may not be security violations at all; they might be intentionally giving the stuff away, and the worst you’re doing is bypassing an ad or two.

DSC00001.JPG - This was in a lot of bookmark sites lately. Search Google Images for this string… and if nobody else is looking, turn “safe search” off! This is the default naming scheme for image files taken on Sony digital cameras, and people post the pictures without renaming them. And don’t forget DSC00002.JPG, DSC00003.JPG, and so on. Judging by my browsing so far, the first thing most people photograph is their girlfriend.

intitle:”index of” ”last modified” ”parent directory” (wmv|mp3) - At last, the one everybody was waiting for: finding free media! This example gives you directories with movie files in either WMV (Windows Media Video) or MP3. To find files on a particular subject, just add the name of that subject. I’ll not speculate on what kind of movies you might be looking for - but I’m sure you’ll think of something! To target other media, try replacing wmv with jpg for images, wav for sounds, etc. The trouble is, this hack is so old that a number of adult porn sites have deliberately set up their web pages to mimic this result, where, of course, you end up with a pop-up demanding credit card data or get link-jacked to a malware site. Have fun, and remember that I gave you all this handy info in the good faith that you’ll only use it responsibly.


intitle:"Index of" passwords modified
allinurl:auth_user_file.txt
"access denied for user" "using password"
"A syntax error has occurred" filetype:ihtml
allinurl: admin mdb
"ORA-00921: unexpected end of SQL command"
inurl:passlist.txt
"Index of /backup"
"Chatologica MetaSearch" "stack tracking:"



Amex Numbers: 300000000000000..399999999999999
MC Numbers: 5178000000000000..5178999999999999

visa 4356000000000000..4356999999999999

"parent directory " /appz/ -xxx -html -htm -php -shtml -opendivx -md5 -md5sums
"parent directory " DVDRip -xxx -html -htm -php -shtml -opendivx -md5 -md5sums
"parent directory " Xvid -xxx -html -htm -php -shtml -opendivx -md5 -md5sums
"parent directory " Gamez -xxx -html -htm -php -shtml -opendivx -md5 -md5sums
"parent directory " MP3 -xxx -html -htm -php -shtml -opendivx -md5 -md5sums
"parent directory " Name of Singer or album -xxx -html -htm -php -shtml -opendivx -md5 -md5sums
Notice that I am only changing the word after "parent directory"; change it to whatever you want and you will get a lot of stuff.




METHOD 2



put this string in google search:

?intitle:index.of? mp3

You only need add the name of the song/artist/singer.

Example: ?intitle:index.of? mp3 jackson



METHOD 3



put this string in google search:

inurl:microsoft filetype:iso

You can change the string to whatever you want, e.g. microsoft to adobe, iso to zip, etc…

"# -FrontPage-" inurl:service.pwd

Frontpage passwords.. very nice clean search results listing !!
"AutoCreate=TRUE password=*"
This searches for the password for "Website Access Analyzer", a Japanese program that creates web statistics. For those who can read Japanese, check out the author's site at: http://www.coara.or.jp/~passy/
"http://*:*@www" domainname
This is a query to get inline passwords from search engines (not just Google). You must type in the query followed by the domain name, without the .com or .net.
"http://*:*@www" bangbus or "http://*:*@www"bangbus
Another way is by just typing
"http://bob:bob@www"
"sets mode: +k"



This search reveals channel keys (passwords) on IRC as revealed from IRC chat logs.


allinurl: admin mdb

Not all of these pages are administrator's access databases containing usernames, passwords and other sensitive information, but many are!
allinurl:auth_user_file.txt

DCForum's password file. This file gives a list of (crackable) passwords, usernames and email addresses for DCForum and for DCShop (a shopping cart program!). Some lists are bigger than others, all are fun, and all belong to googledorks. =)

intitle:"Index of" config.php



This search brings up sites with "config.php" files. To skip the technical discussion, this configuration file contains both a username and a password for an SQL database. Most sites with forums run a PHP message base. This file gives you the keys to that forum, including FULL ADMIN access to the database.
eggdrop filetype:user user



These are eggdrop config files. Avoiding a full-blown discussion about eggdrops and IRC bots, suffice it to say that this file contains usernames and passwords for IRC users.

intitle:index.of.etc



This search gets you access to the etc directory, where many many many types of password files can be found. This link is not as reliable, but crawling etc directories can be really fun!


filetype:bak inurl:"htaccess|passwd|shadow|htusers"



This will search for backup files (*.bak) created by some editors or even by the administrator himself (before activating a new version).

Every attacker knows that changing the extension of a file on a webserver can have ugly consequences.

Let's pretend you need a serial number for windows xp pro.

In the Google search bar, type exactly this: "Windows XP Professional" 94FBR

The key is the 94FBR code: it was included with many MS Office registration codes, so it will help you dramatically reduce the number of 'fake' porn sites that trick you.

Or, if you want to find the serial for WinZip 8.1, search: "Winzip 8.1" 94FBR

clever google tips

Use Gmail Generate Unlimited E-mail Addresses
Gmail has an interesting quirk: you can add a plus sign (+) and any tag just before the @ in your Gmail address, and mail will still reach your inbox. It's called plus-addressing, and it essentially gives you an unlimited number of e-mail addresses to play with. Here's how it works: say your address is pinkyrocks@gmail.com, and you want to automatically label all work e-mails. Add a plus sign and a phrase to make it pinkyrocks+work@gmail.com and set up a filter to label it work (to access your filters go to Settings->Filters and create a filter for messages addressed to pinkyrocks+work@gmail.com, then add the label work).
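If you ever need to handle plus-addresses in your own scripts, the split is mechanical. A small sketch (the helper is my own, not any Gmail API):

```python
def split_plus_address(addr: str) -> tuple[str, str]:
    """Split 'user+tag@host' into ('user@host', 'tag'); tag is '' if absent."""
    local, _, host = addr.partition("@")
    base, _, tag = local.partition("+")
    return f"{base}@{host}", tag

print(split_plus_address("pinkyrocks+work@gmail.com"))  # ('pinkyrocks@gmail.com', 'work')
```

This is the same normalization a filter effectively performs: everything between the + and the @ is metadata you chose, while delivery is based only on the base mailbox.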

More real world examples:

Find out who is spamming you: Be sure to use plus-addressing for every form you fill out online and give each site a different plus address.

Example: You could use
pinkyrocks+nytimes@gmail.com for nytimes.com
pinkyrocks+freestuff@gmail.com for freestuff.com
Then you can tell which site has given your e-mail address to spammers, and automatically send them to the trash.

Automatically label your incoming mail: I've talked about that above.

Archive your mail: If you receive periodic updates about your bank account balance or are subscribed to a lot of mailing lists that you don't check often, then you can send that sort of mail to the archives and bypass your Inbox.

Example: For the mailing list, you could give pinkyrocks+mailinglist1@gmail.com as your address, and assign a filter that will archive mail to that address automatically. Then you can just check in once in a while on the archive if you want to catch up.

Update (9/7): Several commenters have indicated that this is not a Gmail-specific trick. kl says Fastmail has enabled this feature as well. caliban10 reports that a lot of sites reject addresses with a plus sign. You might use other services like Mailinator for disposable addresses instead. pbinder recommends using services like SpamGourmet, which redirects mail to your real address.

Windows uses 20% of your bandwidth Here's how to Get it back


A nice little tweak for XP. Microsoft reserves 20% of your available bandwidth for its own purposes (presumably for updates, interrogating your machine, etc.).

Here's how to get it back:

Click Start-->Run-->type "gpedit.msc" without the "

This opens the group policy editor. Then go to:


Local Computer Policy-->Computer Configuration-->Administrative Templates-->Network-->QOS Packet Scheduler-->Limit Reservable Bandwidth


Double click on Limit Reservable bandwidth. It will say it is not configured, but the truth is under the 'Explain' tab :

"By default, the Packet Scheduler limits the system to 20 percent of the bandwidth of a connection, but you can use this setting to override the default."

So the trick is to ENABLE reservable bandwidth, then set it to ZERO.

This will allow the system to reserve nothing, rather than the default 20%.

Google Page Rank Explained

Grand Valley State University

Imagine a library containing 25 billion documents but with no centralized organization and no librarians. In addition, anyone may add a document at any time without telling anyone. You may feel sure that one of the documents contained in the collection has a piece of information that is vitally important to you, and, being impatient like most of us, you'd like to find it in a matter of seconds. How would you go about doing it?

Posed in this way, the problem seems impossible. Yet this description is not too different from the World Wide Web, a huge, highly-disorganized collection of documents in many different formats. Of course, we're all familiar with search engines (perhaps you found this article using one), so we know that there is a solution. This article will describe Google's PageRank algorithm and how it returns pages from the web's collection of 25 billion documents that match search criteria so well that "google" has become a widely used verb.

Most search engines, including Google, continually run an army of computer programs that retrieve pages from the web, index the words in each document, and store this information in an efficient format. Each time a user asks for a web search using a search phrase, such as "search engine," the search engine determines all the pages on the web that contain the words in the search phrase. (Perhaps additional information such as the distance between the words "search" and "engine" will be noted as well.)

Here is the problem: Google now claims to index 25 billion pages. Roughly 95% of the text in web pages is composed from a mere 10,000 words. This means that, for most searches, there will be a huge number of pages containing the words in the search phrase. What is needed is a means of ranking the importance of the pages that fit the search criteria so that the pages can be sorted with the most important pages at the top of the list.
One way to determine the importance of pages is to use a human-generated ranking. For instance, you may have seen pages that consist mainly of a large number of links to other resources in a particular area of interest. Assuming the person maintaining this page is reliable, the pages referenced are likely to be useful. Of course, the list may quickly fall out of date, and the person maintaining the list may miss some important pages, either unintentionally or as a result of an unstated bias.

Google's PageRank algorithm assesses the importance of web pages without human evaluation of the content. In fact, Google feels that the value of its service is largely in its ability to provide unbiased results to search queries; Google claims, "the heart of our software is PageRank." As we'll see, the trick is to ask the web itself to rank the importance of pages.

How to tell who's important

If you've ever created a web page, you've probably included links to other pages that contain valuable, reliable information. By doing so, you are affirming the importance of the pages you link to. Google's PageRank algorithm stages a monthly popularity contest among all pages on the web to decide which pages are most important. The fundamental idea put forth by PageRank's creators, Sergey Brin and Lawrence Page, is this: the importance of a page is judged by the number of pages linking to it as well as their importance.

We will assign to each web page P a measure of its importance I(P), called the page's PageRank. At various sites, you may find an approximation of a page's PageRank. (For instance, the home page of The American Mathematical Society currently has a PageRank of 8 on a scale of 10. Can you find any pages with a PageRank of 10?) This reported value is only an approximation, since Google declines to publish actual PageRanks in an effort to frustrate those who would manipulate the rankings.

Here's how the PageRank is determined. Suppose that page Pj has lj links. If one of those links is to page Pi, then Pj will pass on 1/lj of its importance to Pi. The importance ranking of Pi is then the sum of all the contributions made by pages linking to it. That is, if we denote the set of pages linking to Pi by Bi, then

\[ I(P_i)=\sum_{P_j\in B_i} \frac{I(P_j)}{l_j} \]

This may remind you of the chicken and the egg: to determine the importance of a page, we first need to know the importance of all the pages linking to it. However, we may recast the problem into one that is more mathematically familiar.

Let's first create a matrix, called the hyperlink matrix, $ {\bf H}=[H_{ij}] $, in which the entry in the ith row and jth column is

\[ H_{ij}=\left\{\begin{array}{ll} 1/l_{j} & \hbox{if } P_j\in B_i \\ 0 & \hbox{otherwise} \end{array}\right. \]

Notice that H has some special properties. First, its entries are all nonnegative. Also, the sum of the entries in a column is one unless the page corresponding to that column has no links. Matrices in which all the entries are nonnegative and the sum of the entries in every column is one are called stochastic; they will play an important role in our story.

We will also form a vector $ I=[I(P_i)] $ whose components are the PageRanks--that is, the importance rankings--of all the pages. The condition above defining the PageRank may be expressed as

\[ I = {\bf H}I \]

In other words, the vector I is an eigenvector of the matrix H with eigenvalue 1. We also call this a stationary vector of H.

Let's look at an example. Shown below is a representation of a small collection of eight web pages, with links represented by arrows. The corresponding matrix is
shown as a figure in the original article, together with its stationary vector.
This shows that page 8 wins the popularity contest. Here is the same figure with the web pages shaded in such a way that the pages with higher PageRanks are lighter (figure in the original article).
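The recipe defining H can be coded directly. A minimal pure-Python sketch, using a made-up three-page web rather than the article's eight-page example:

```python
def hyperlink_matrix(links: dict[int, list[int]], n: int) -> list[list[float]]:
    """H[i][j] = 1/l_j if page j links to page i, else 0."""
    H = [[0.0] * n for _ in range(n)]
    for j, outlinks in links.items():
        for i in outlinks:
            H[i][j] = 1.0 / len(outlinks)
    return H

# Hypothetical three-page web: 0 -> 1,2 ; 1 -> 2 ; 2 -> 0
H = hyperlink_matrix({0: [1, 2], 1: [2], 2: [0]}, 3)
print(H)
```

Each column of the result sums to one, as the stochastic property requires (columns for dangling pages, which this example does not have, would be all zero).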

Computing I

There are many ways to find the eigenvectors of a square matrix. However, we are in for a special challenge, since the matrix H is a square matrix with one column for each web page indexed by Google. This means that H has about n = 25 billion columns and rows. However, most of the entries in H are zero; in fact, studies show that web pages have an average of about 10 links, meaning that, on average, all but 10 entries in every column are zero.

We will choose a method known as the power method for finding the stationary vector I of the matrix H. How does the power method work? We begin by choosing a vector I^0 as a candidate for I and then producing a sequence of vectors I^k by

\[ I^{k+1}={\bf H}I^k \]

The method is founded on the following general principle that we will soon investigate.

General principle: The sequence I^k will converge to the stationary vector I.
We will illustrate with the example above.
I^0    I^1    I^2     I^3     I^4     ...   I^60    I^61
1      0      0       0       0.0278  ...   0.06    0.06
0      0.5    0.25    0.1667  0.0833  ...   0.0675  0.0675
0      0.5    0       0       0       ...   0.03    0.03
0      0      0.5     0.25    0.1667  ...   0.0675  0.0675
0      0      0.25    0.1667  0.1111  ...   0.0975  0.0975
0      0      0       0.25    0.1806  ...   0.2025  0.2025
0      0      0       0.0833  0.0972  ...   0.18    0.18
0      0      0       0.0833  0.3333  ...   0.295   0.295
It is natural to ask what these numbers mean. Of course, there can be no absolute measure of a page's importance, only relative measures for comparing the importance of two pages through statements such as "Page A is twice as important as Page B." For this reason, we may multiply all the importance rankings by some fixed quantity without affecting the information they tell us. In this way, we will always assume, for reasons to be explained shortly, that the sum of all the popularities is one.
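The iteration itself is only a few lines of code. A pure-Python sketch, run on a hypothetical three-page strongly connected web (not the article's example), with the result normalized so that the entries sum to one:

```python
def mat_vec(M, v):
    """Multiply matrix M by vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def power_method(H, steps=100):
    """Iterate I <- H I from a standard basis vector, then normalize."""
    n = len(H)
    I = [1.0] + [0.0] * (n - 1)
    for _ in range(steps):
        I = mat_vec(H, I)
    s = sum(I)
    return [x / s for x in I] if s else I

# Hypothetical web: 0 -> 1,2 ; 1 -> 2 ; 2 -> 0
H = [[0.0, 0.0, 1.0],
     [0.5, 0.0, 0.0],
     [0.5, 1.0, 0.0]]
print(power_method(H))
```

For this little web the stationary vector works out to (0.4, 0.2, 0.4): pages 0 and 2 tie for first place, and the iteration converges to it because the web is strongly connected and aperiodic.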

Three important questions

Three questions naturally come to mind:
  • Does the sequence I^k always converge?
  • Is the vector to which it converges independent of the initial vector I^0?
  • Do the importance rankings contain the information that we want?
Given the current method, the answer to all three questions is "No!" However, we'll see how to modify our method so that we can answer "yes" to all three. Let's first look at a very simple example. Consider the following small web consisting of two web pages, one of which links to the other:
(figure in the original article) with matrix

\[ {\bf H} = \left[\begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array}\right] \]
Here is one way in which our algorithm could proceed:
I^0   I^1   I^2   I^3 = I
1     0     0     0
0     1     0     0
In this case, the importance rating of both pages is zero, which tells us nothing about the relative importance of these pages. The problem is that P2 has no links. Consequently, it takes some of the importance from page P1 in each iterative step but does not pass it on to any other page. This has the effect of draining all the importance from the web. Pages with no links are called dangling nodes, and there are, of course, many of them in the real web we want to study. We'll see how to deal with them in a minute, but first let's consider a new way of thinking about the matrix H and stationary vector I.

A probabilistic interpretation of H

Imagine that we surf the web at random; that is, when we find ourselves on a web page, we randomly follow one of its links to another page after one second. For instance, if we are on page Pj with lj links, one of which takes us to page Pi, the probability that we next end up on page Pi is then $ 1/l_j $.

As we surf randomly, we will denote by $ T_j $ the fraction of time that we spend on page Pj. Then the fraction of the time that we end up on page Pi coming from Pj is $ T_j/l_j $. If we end up on Pi, we must have come from a page linking to it. This means that

\[ T_i = \sum_{P_j\in B_i} T_j/l_j \]

where the sum is over all the pages Pj linking to Pi. Notice that this is the same equation defining the PageRank rankings, and so $ I(P_i) = T_i $. This allows us to interpret a web page's PageRank as the fraction of time that a random surfer spends on that web page. This may make sense if you have ever surfed around for information about a topic you were unfamiliar with: if you follow links for a while, you find yourself coming back to some pages more often than others. Just as "All roads lead to Rome," these are typically more important pages. Notice that, given this interpretation, it is natural to require that the sum of the entries in the PageRank vector I be one.

Of course, there is a complication in this description: if we surf randomly, at some point we will surely get stuck at a dangling node, a page with no links. To keep going, we will choose the next page at random; that is, we pretend that a dangling node has a link to every other page. This has the effect of modifying the hyperlink matrix H by replacing the column of zeroes corresponding to a dangling node with a column in which each entry is 1/n. We call this new matrix S. In our previous example, we now have
matrix

\[ {\bf S} = \left[\begin{array}{cc} 0 & 1/2 \\ 1 & 1/2 \end{array}\right] \]

and eigenvector

\[ I = \left[\begin{array}{c} 1/3 \\ 2/3 \end{array}\right] \]
In other words, page P2 has twice the importance of page P1, which may feel about right to you. The matrix S has the pleasant property that the entries are nonnegative and the sum of the entries in each column is one. In other words, it is stochastic. Stochastic matrices have several properties that will prove useful to us. For instance, stochastic matrices always have stationary vectors. For later purposes, we will note that S is obtained from H in a simple way. If A is the matrix whose entries are all zero except for the columns corresponding to dangling nodes, in which each entry is 1/n, then S = H + A.
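In code, the patch S = H + A amounts to replacing each all-zero column with a uniform column. A minimal sketch (my own helper) using the two-page dangling-node example, where page 0 links to page 1 and page 1 has no links:

```python
def to_stochastic(H):
    """Replace each all-zero column (a dangling node) with uniform 1/n entries."""
    n = len(H)
    S = [row[:] for row in H]
    for j in range(n):
        if all(S[i][j] == 0.0 for i in range(n)):
            for i in range(n):
                S[i][j] = 1.0 / n
    return S

# Page 0 links to page 1; page 1 is dangling.
H = [[0.0, 0.0],
     [1.0, 0.0]]
S = to_stochastic(H)
print(S)  # [[0.0, 0.5], [1.0, 0.5]]
```

Every column of S now sums to one, so S is stochastic and has a stationary vector.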

How does the power method work?

In general, the power method is a technique for finding an eigenvector of a square matrix corresponding to the eigenvalue with the largest magnitude. In our case, we are looking for an eigenvector of S corresponding to the eigenvalue 1. Under the best of circumstances, to be described soon, the other eigenvalues of S will have a magnitude smaller than one; that is, S are  src=S are  align= and that |\lambda_2| \geq |\lambda_3| \geq \ldots \geq |\lambda_n| \] " src="http://www.ams.org/featurecolumn/images/december2006/index_15.gif" title="\[ 1 = \lambda_1 > |\lambda_2| \geq |\lambda_3| \geq \ldots \geq |\lambda_n| \] " align="absmiddle"> We will also assume that there is a basis vj of eigenvectors for S with corresponding eigenvalues $ \lambda_j $ . This assumption is not necessarily true, but with it we may more easily illustrate how the power method works. We may write our initial vector I 0 as \[  I^0 = c_1v_1+c_2v_2 + \ldots + c_nv_n  \] Then \begin{eqnarray*} I^1={\bf S}I^0 &=&c_1v_1+c_2\lambda_2v_2 + \ldots + c_n\lambda_nv_n \\ I^2={\bf S}I^1 &=&c_1v_1+c_2\lambda_2^2v_2 + \ldots + c_n\lambda_n^2v_n \\ \vdots & & \vdots \\ I^{k}={\bf S}I^{k-1} &=&c_1v_1+c_2\lambda_2^kv_2 + \ldots + c_n\lambda_n^kv_n \\ \end{eqnarray*}  Since the eigenvalues $ \lambda_j $ with $ j\geq2 $ have magnitude smaller than one, it follows that $ \lambda_j^k\to0 $ if $ j\geq2 $ and therefore $ I^k\to I=c_1v_1 $ , an eigenvector corresponding to the eigenvalue 1. It is important to note here that the rate at which $ I^k\to I $ is determined by $ |\lambda_2| $ . When $ |\lambda_2| $ is relatively close to 0, then $ \lambda_2^k\to0 $ relatively quickly. For instance, consider the matrix \[  {\bf S} = \left[\begin{array}{cc}0.65 & 0.35 \\ 0.35 & 0.65 \end{array}\right].   \] The eigenvalues of this matrix are $ \lambda_1=1 $ and $ \lambda_2=0.3 $ . In the figure below, we see the vectors I k, shown in red, converging to the stationary vector I shown in green. 
Now consider the matrix

\[  {\bf S} = \left[\begin{array}{cc}0.85 & 0.15 \\ 0.15 & 0.85 \end{array}\right].   \]

Here the eigenvalues are $ \lambda_1=1 $ and $ \lambda_2=0.7 $ . Notice how the vectors $ I^k $ converge more slowly to the stationary vector I in this example, in which the second eigenvalue has a larger magnitude.
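The two examples above are easy to reproduce. Here is a minimal sketch of the power method in Python (assuming NumPy is available), applied to the first matrix:

```python
import numpy as np

def power_method(S, v, iterations):
    """Repeatedly apply S to v, returning the iterate I^k."""
    for _ in range(iterations):
        v = S @ v
    return v

# The stochastic matrix from the first example above (lambda_2 = 0.3).
S = np.array([[0.65, 0.35],
              [0.35, 0.65]])

I0 = np.array([1.0, 0.0])   # the surfer starts on the first page
I = power_method(S, I0, 50)
print(I)                    # converges to the stationary vector [0.5, 0.5]
```

Swapping in the second matrix (with $ \lambda_2=0.7 $ ) and printing the intermediate iterates shows the slower convergence described above.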

When things go wrong

In our discussion above, we assumed that the matrix S had the property that $ \lambda_1=1 $ and that all other eigenvalues have a magnitude smaller than one. This does not always hold for the matrices S we encounter. Suppose, for instance, that our web consists of five pages arranged in a circle, each page linking only to the next. In this case, the matrix S is

\[  {\bf S}=\left[\begin{array}{ccccc} 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{array}\right].  \]

Then we see

\[ I^0=\left[\begin{array}{c}1\\0\\0\\0\\0\end{array}\right],\quad I^1=\left[\begin{array}{c}0\\1\\0\\0\\0\end{array}\right],\quad I^2=\left[\begin{array}{c}0\\0\\1\\0\\0\end{array}\right],\quad I^3=\left[\begin{array}{c}0\\0\\0\\1\\0\end{array}\right],\quad I^4=\left[\begin{array}{c}0\\0\\0\\0\\1\end{array}\right],\quad I^5=\left[\begin{array}{c}1\\0\\0\\0\\0\end{array}\right]=I^0. \]
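The cycling of these iterates can be checked numerically. A short sketch (assuming NumPy, and using the five-page cycle just described):

```python
import numpy as np

# The five-page cycle: page j links only to page j+1 (wrapping around),
# so multiplying by S shifts e_j to e_{j+1}.
S = np.roll(np.eye(5), 1, axis=0)

v = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # I^0
for _ in range(5):
    v = S @ v                             # I^1, ..., I^5
print(v)                                  # I^5 equals I^0: the iterates cycle

# S is not primitive: every power of S is a permutation matrix,
# so no power has all positive entries.
print(np.all(np.linalg.matrix_power(S, 12) > 0))   # False
```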
In this case, the sequence of vectors $ I^k $ fails to converge. Why is this? The second eigenvalue of the matrix S satisfies $ |\lambda_2|=1 $ , and so the argument we gave to justify the power method no longer holds. To guarantee that $ |\lambda_2|<1 $ , we need the matrix S to be primitive. This means that, for some m, $ {\bf S}^m $ has all positive entries. In other words, given any two pages, it is possible to get from the first page to the second after following exactly m links. Clearly, our most recent example does not satisfy this property. In a moment, we will see how to modify our matrix S to obtain a primitive, stochastic matrix, which therefore satisfies $ |\lambda_2|<1 $ .

Here is another example in which things go wrong. Consider an eight-page web in which the last four pages receive links from the first four but send none back.

[Figure: the matrix S for this web and its stationary vector I.]
Notice that the PageRanks assigned to the first four web pages are zero. However, this doesn't feel right: each of these pages has links coming to it from other pages. Clearly, somebody likes these pages! Generally speaking, we want the importance rankings of all pages to be positive. The problem with this example is that it contains a smaller web within it: links come into this smaller web, but none go out. Just as in the example of the dangling node we discussed above, these pages form an "importance sink" that drains the importance out of the other four pages. This happens when the matrix S is reducible; that is, when S can be written in block form as

\[  S=\left[\begin{array}{cc} * & 0 \\ * & * \end{array}\right].  \]

Conversely, if the matrix S is irreducible, we can guarantee that there is a stationary vector with all positive entries. A web is called strongly connected if, given any two pages, there is a way to follow links from the first page to the second. Clearly, our most recent example is not strongly connected. Strongly connected webs, however, produce irreducible matrices S.

To summarize: the matrix S is stochastic, which implies that it has a stationary vector. However, we also need S to be (a) primitive, so that $ |\lambda_2|<1 $ , and (b) irreducible, so that the stationary vector has all positive entries.

To find a new matrix that is both primitive and irreducible, we will modify the way our random surfer moves through the web.
As it stands now, the movement of our random surfer is determined by S: either he follows one of the links on his current page or, if at a page with no links, he randomly chooses any other page to move to. To make our modification, we first choose a parameter $\alpha$ between 0 and 1. Now suppose that our random surfer moves in a slightly different way: with probability $\alpha$ , he is guided by S, and with probability $ 1-\alpha $ , he chooses the next page at random. If we denote by ${\bf 1}$ the $ n\times n $ matrix whose entries are all one, we obtain the Google matrix:

\[  {\bf G}=\alpha{\bf S}+ (1-\alpha)\frac{1}{n}{\bf 1}  \]

Notice that G is stochastic, as it is a combination of stochastic matrices. Furthermore, all the entries of G are positive, which implies that G is both primitive and irreducible. Therefore, G has a unique stationary vector I that may be found using the power method.

The role of the parameter $\alpha$ is an important one. Notice that if $ \alpha=1 $ , then ${\bf G}={\bf S}$ ; this means that we are working with the original hyperlink structure of the web. However, if $ \alpha=0 $ , then $ {\bf G}=\frac{1}{n}{\bf 1} $ ; in other words, the web we are considering has a link between any two pages, and we have lost the original hyperlink structure of the web. Clearly, we would like to take $\alpha$ close to 1 so that the hyperlink structure of the web is weighted heavily in the computation. However, there is another consideration: the rate of convergence of the power method is governed by the magnitude of the second eigenvalue $ |\lambda_2| $ , and for the Google matrix it has been proven that $ |\lambda_2|=\alpha $ . This means that when $\alpha$ is close to 1, the convergence of the power method is very slow. As a compromise between these two competing interests, Sergey Brin and Larry Page, the creators of PageRank, chose $ \alpha=0.85 $ .
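Forming the Google matrix for a small web shows how the damping repairs the failing cycle example. A sketch (assuming NumPy; the five-page cycle from earlier is reused):

```python
import numpy as np

alpha, n = 0.85, 5
S = np.roll(np.eye(n), 1, axis=0)   # the five-page cycle, for which S alone fails

# The Google matrix: with probability alpha follow a link,
# otherwise jump to a uniformly random page.
G = alpha * S + (1 - alpha) / n * np.ones((n, n))

v = np.zeros(n)
v[0] = 1.0                          # I^0: the surfer starts on page 1
for _ in range(100):
    v = G @ v                       # the power method now converges
print(v)                            # all entries positive; by symmetry, 0.2 each
```

Since $ |\lambda_2|=\alpha=0.85 $ here, each iteration shrinks the error by a factor of 0.85, so 100 iterations are ample for this tiny web.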

Computing I

What we've described so far looks like a good theory, but remember that we need to apply it to $ n\times n $ matrices where n is about 25 billion! In fact, the power method is especially well-suited to this situation. Remember that the stochastic matrix S may be written as \[  {\bf S}={\bf H} + {\bf A}  \] and therefore the Google matrix has the form \[  {\bf G}=\alpha{\bf H} + \alpha{\bf A} + \frac{1-\alpha}{n}{\bf 1}  \] Therefore, \[  {\bf G}I^k=\alpha{\bf H}I^k + \alpha{\bf A}I^k + \frac{1-\alpha}{n}{\bf 1}I^k  \] Now recall that most of the entries in H are zero; on average, only ten entries per column are nonzero. Therefore, evaluating HI k requires only ten nonzero terms for each entry in the resulting vector. Also, the rows of A are all identical as are the rows of 1. Therefore, evaluating AI k and 1I k amounts to adding the current importance rankings of the dangling nodes or of all web pages. This only needs to be done once. With the value of $\alpha$ chosen to be near 0.85, Brin and Page report that 50 - 100 iterations are required to obtain a sufficiently good approximation to I. The calculation is reported to take a few days to complete. Of course, the web is continually changing. First, the content of web pages, especially for news organizations, may change frequently. In addition, the underlying hyperlink structure of the web changes as pages are added or removed and links are added or removed. It is rumored that Google recomputes the PageRank vector I roughly every month. Since the PageRank of pages can be observed to fluctuate considerably during this time, it is known to some as the Google Dance. (In 2002, Google held a Google Dance!)
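The splitting above is what makes the computation feasible: only the sparse part H needs a genuine matrix-vector product, while the A and 1 terms reduce to sums. Here is a small sketch of one power-method step under that splitting; the link structure and the function name pagerank are hypothetical, chosen only for illustration:

```python
import numpy as np

def pagerank(links, n, alpha=0.85, iterations=100):
    """Sparse power iteration: G I = alpha*H*I + alpha*A*I + ((1-alpha)/n)*1*I.

    links[j] lists the pages that page j links to; pages with no
    links are dangling nodes, handled by the rank-one matrix A."""
    I = np.full(n, 1.0 / n)
    dangling = [j for j in range(n) if not links.get(j)]
    for _ in range(iterations):
        new = np.zeros(n)
        for j, outs in links.items():          # H*I: only nonzero columns touched
            share = I[j] / len(outs)
            for i in outs:
                new[i] += alpha * share
        leaked = sum(I[j] for j in dangling)   # A*I: dangling mass, spread evenly
        new += alpha * leaked / n
        new += (1 - alpha) / n * I.sum()       # ((1-alpha)/n)*1*I
        I = new
    return I

# A hypothetical four-page web; page 3 is a dangling node.
links = {0: [1, 2], 1: [2], 2: [0, 3]}
ranks = pagerank(links, 4)
print(ranks)   # sums to 1, every entry positive
```

Each iteration costs work proportional to the number of links rather than $ n^2 $ , which is why the method scales to billions of pages.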

Summary

Brin and Page introduced Google in 1998, a time when the pace at which the web was growing began to outstrip the ability of existing search engines to yield usable results. At that time, most search engines had been developed by businesses that were not interested in publishing the details of how their products worked. In developing Google, Brin and Page wanted to "push more development and understanding into the academic realm." That is, they hoped, first of all, to improve the design of search engines by moving it into a more open, academic environment. In addition, they felt that the usage statistics for their search engine would provide an interesting data set for research. It appears that the federal government, which recently tried to obtain some of Google's statistics, feels the same way. There are other algorithms that use the hyperlink structure of the web to rank the importance of web pages. One notable example is the HITS algorithm, developed by Jon Kleinberg, which forms the basis of the Teoma search engine. In fact, it is interesting to compare the results of the same search sent to different search engines as a way to understand why some complain of a "Googleopoly."

Better Search engine


rahul avasthy
Search Engine name


http://www.gahooyoogle.com/ ***
http://www.jux2.com/ *****
http://www.dogpile.com/ *****
http://www.sputtr.com/ *****
http://www.spacetime.com/
http://mindset.research.yahoo.com/
http://www.mooter.com/
http://qunu.com/
http://www.webcrawler.com/info.wbcrwl/
http://www.scirus.com/srsapp/
http://www.randomwebsearch.com/
http://www.soople.com/soople_int.php



Name | URL | Category
Accoona | www.accoona.com | A.I. Search (HM)
AfterVote (SEM) | www.aftervote.com | Social Search
Agent 55 | www.agent55.com | MetaSearch
AllTha.at | www.allth.at | Continuous Search
AnswerBus | www.answerbus.com | Semantic Search
Blabline | www.blabline.com | Podcast Search
Blinkx* | www.blinkx.com | Video Search
Blogdigger | www.blogdigger.com | Blog Search
Bookmach.com* | www.bookmach.com | Bookmark Search
ChaCha* (#1 2006) | www.chacha.com | Guided Search
ClipBlast!* | www.clipblast.com | Video Search
Clusty* | www.clusty.com | Clustering Search
CogHog | www.infactsolutions.com/projects/coghog/demo.htm | Semantic Search
Collarity* | www.collarity.com | Social Search (HM)
Congoo* | www.congoo.com | Premium Content Search
CrossEngine (Mr. Sapo)* | www.crossengine.com | MetaSearch
Cydral | http://en.cydral.com | Image Search (French)
Decipho* | www.decipho.com | Filtered Search
Deepy | www.deepy.com | RIA Search
Ditto* | www.ditto.com | Visual Search
Dogpile | www.dogpile.com | MetaSearch
Exalead* | www.exalead.com/search | Visual Search
Factbites* | www.factbites.com | Filtered Search
FeedMiner | www.feedminer.com | RSS Feeds Search
Feedster | www.feedster.com | RSS Feeds Search
Filangy | www.filangy.com | Social Search
Find Forward | www.findforward.com | Meta Feature Search
FindSounds* | www.findsounds.com | Audio Search
Fisssh! | www.fisssh.com | Filtered Search (HM)
FyberSearch | www.fybersearch.com | Meta Feature Search
Gigablast* | www.gigablast.com | Blog Search
Girafa* | www.girafa.com | Visual Display
Gnosh | www.gnosh.org | MetaSearch
GoLexa | www.golexa.com | Meta Feature Search
GoshMe* (SEM) | www.goshme.com | Meta Meta Search
GoYams* | www.goyams.com | MetaSearch
Grokker* | www.grokker.com | MetaSearch
Gruuve | www.gruuve.com | Recommendation Search
Hakia | www.hakia.com | Meaning Based Search
Hyper Search | http://hypersearch.webhop.org.90.seekdotnet.com | Filtered Search
iBoogie | www.iboogie.com | Clustering Search
IceRocket* | www.icerocket.com | Blog Search
Info.com | www.info.com | MetaSearch
Ixquick* | www.ixquick.com | MetaSearch
KartOO* | www.kartoo.com | Clustering Search
KoolTorch (SEM) | www.kooltorch.com | Clustering Search
Lexxe* | www.lexxe.com | Natural Language Processing (NLP)
Lijit | www.lijit.com | Search People
Like* | www.like.com | Visual Search
LivePlasma* | www.liveplasma.com | Recommendation Search (HM)
Local.com* | www.local.com | Local Search
Mamma | www.mamma.com | MetaSearch
Mnemomap | www.mnemo.org | Clustering Search
Mojeek* | www.mojeek.com | Custom Search Engines (CSE)
Mooter* | www.mooter.com | Clustering Search
Mp3Realm | http://mp3realm.org | MP3 Search
Mrquery | www.mrquery.com | Clustering Search
Ms. Dewey* | www.msdewey.com | Unique Interface (HM)
Nutshell | www.gonutshell.com | MetaSearch
Omgili | www.omgili.com | Social Search
Pagebull* | www.pagebull.com | Visual Display
PeekYou | www.peekyou.com | People Search
Pipl | http://pipl.com | People Search
PlanetSearch* | www.planetsearch.com | MetaSearch
PodZinger | www.podzinger.com | Podcast Search
PolyMeta | www.polymeta.com | MetaSearch
Prase | www.prase.us | MetaSearch
PureVideo | www.purevideo.com | Video Search (HM)
Qksearch | www.qksearch.com | Clustering Search
Querycat | http://querycat.com | F.A.Q. Search (HM)
Quintura* | www.quintura.com | Clustering Search
RedZee | www.redzee.com | Visual Display
Retrievr | http://labs.systemone.at/retrievr/ | Visual Search
Searchbots | www.searchbots.net | Continuous Search
SearchKindly | www.searchkindly.org | Charity Search
Searchles* (DumbFind) | www.searchles.com | Social Search
SearchTheWeb2* | www.searchtheweb2.com | Long Tail Search
SeeIt | www.seeit.com | Image Search
Sidekiq* | www.sidekiq.com | MetaSearch
Slideshow* | http://slideshow.zmpgroup.com/ | Visual Display
Slifter* | www.slifter.com | Mobile Shopping Search (HM)
Sphere | www.sphere.com | Blog Search
Sproose | www.sproose.com | Social Search
Srchr* | www.srchr.com | MetaSearch
SurfWax* | www.surfwax.com | Meaning Based Search
Swamii | www.swamii.com | Continuous Search (HM)
TheFind.com* | www.thefind.com | Shopping Search
Trexy* | www.trexy.com | Search Trails
Turboscout* | www.turboscout.com | MetaSearch
Twerq | www.twerq.com | Tabbed Results
Url.com* | www.url.com | Social Search
WasaLive! | http://en.wasalive.com | RSS Search
Web 2.0* | www.web20searchengine.com | Web 2.0 Search
Webbrain* | www.webbrain.com | Clustering Search
Whonu?* | www.whonu.com | MetaSearch
Wikio* | www.wikio.com | Web 2.0 Search
WiseNut* | www.wisenut.com | Clustering Search
Yoono* | www.yoono.com | Social Search
ZabaSearch* | www.zabasearch.com | People Search
Zuula* | www.zuula.com | Tabbed Search (HM)
