Tuesday, May 6, 2008

Internet meme


The term Internet meme is a neologism used to describe a catchphrase or concept that spreads rapidly from person to person via the Internet. The term is a reference to the concept of memes, although that concept refers to a much broader category of cultural information.


At its most basic, an Internet meme is simply the propagation of a digital file or hyperlink from one person to others using methods available through the Internet (for example, email, blogs, social networking sites, instant messaging, etc.). The content often consists of a saying or joke, a rumor, an altered or original image, a complete website, a video clip or animation, or an offbeat news story, among many other possibilities. An Internet meme may stay the same or may evolve over time, by chance or through commentary, imitations, and parody versions, or even by collecting news accounts about itself. Internet memes have a tendency to evolve and spread extremely quickly, sometimes going in and out of popularity in a matter of days. They are spread organically, voluntarily, peer to peer, rather than by compulsion, predetermined path, or completely automated means.
The term may refer to the content that spreads from user to user, the idea behind the content, or the phenomenon of its spread. Internet memes have been seen as a form of art. There exist websites that collect and popularize Internet memes as well as sites devoted to the spread of specific Internet memes. The term is generally not applied to content or web services that are seen as legitimate, useful, and non-faddish, or that spread through organized publishing and distribution channels. Thus, serious news stories, videogames, web services, songs by established musical groups, or the like are usually not called Internet memes. Over time, Internet memes can show interesting patterns, moving from individual web pages and pictures to user-created remakes of popular content.

Internet access


Internet access refers to the means by which users connect to the Internet.
Common methods of Internet access include dial-up, landline (over coaxial cable, fiber-optic or copper wires), T-lines, Wi-Fi, satellite and cell phones.
Public places to use the Internet include libraries and Internet cafes, where computers with Internet connections are available. Some libraries provide stations for hooking up patrons' own laptops to local area networks (LANs). There are also wireless Internet access points in many public places such as airport halls, in some cases just for brief use while standing. These access points may provide coin-operated computers or Wi-Fi hotspots that enable specially equipped laptops to pick up Internet service signals. Various terms are used, such as "public Internet kiosk", "public access terminal", and "Web payphone". Many hotels now also have public terminals, though these are usually fee-based.
Wi-Fi provides wireless access to computer networks, and therefore can do so to the Internet itself. Hotspots providing such access include Wi-Fi cafes, where a would-be user needs to bring their own wireless-enabled device such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. A hotspot need not be limited to a confined location: a whole campus or park, or even an entire city, can be enabled. Grassroots efforts have led to wireless community networks.
Apart from Wi-Fi, there have been experiments with proprietary mobile wireless networks like Ricochet, various high-speed data services over cellular or mobile phone networks, and fixed wireless services. These services have not enjoyed widespread success due to their high cost of deployment, which is passed on to users in high usage fees. New wireless technologies such as WiMAX have the potential to alleviate these concerns and enable simple and cost effective deployment of metropolitan area networks covering large, urban areas. There is a growing trend towards wireless mesh networks, which offer a decentralized and redundant infrastructure and are often considered the future of the Internet.
Broadband access over power lines was approved in 2004 in the United States in the face of stiff resistance from the amateur radio community. The problem with modulating a carrier signal below 100 MHz onto power lines is that an above-ground power line can act as a giant antenna and jam long-distance radio frequencies used by amateurs, seafarers and others. A recent discovery, called "E-Line" allows propagating much higher frequency carriers, from 100 MHz through at least 10 GHz, onto a single conductor of a power line and offers the possibility of very high speed fixed and mobile information services at very low cost without the problems associated with the lower frequency signals.
The use of the Internet around the world has been growing rapidly over the last decade, although the growth rate seems to have slowed somewhat after 2000. The phase of rapid growth is ending in industrialized countries, as usage becomes ubiquitous there, but the spread continues in Africa, Latin America, the Caribbean and the Middle East. Brazil offers one example of large numbers of people gaining access: thanks to lower taxes on computers and on dial-up providers, the number of Brazilians online has grown significantly over the past two years.

Wednesday, April 16, 2008

Wage hike to benefit only 5M of 34M workers

QUEZON CITY, Philippines - Only five million of 34 million Filipino workers, or about 15 percent of the total labor force, will benefit from a wage increase, while more than 28 million workers will be left to cope with the rising prices of goods.

This was the assessment released on Wednesday by Ciriaco Lagunzad, executive director of the National Wage and Productivity Commission (NWPC).

According to Lagunzad, the wage increase will not be across the board – that is, only minimum wage earners will get pay hikes determined by the regional wage boards. And those earning more than P350 a day will not be covered by the increase.

The revelation prompted two groups to remark that President Gloria Macapagal-Arroyo was merely trying to boost her sagging popularity when she ordered regional wage boards last Monday to implement a wage hike.

This was the view presented in a statement issued on Wednesday by the Pambansang Lakas ng Kilusang Mamamalakaya ng Pilipinas (Pamalakaya) and the Unyon ng mga Manggagawa sa Agrikultura (UMA).

The five million workers, the groups said, would not include the bulk of minimum wage earners representing organized and unorganized labor.

"Once again, President Arroyo is taking the Filipino workers to another rollercoaster ride – and to her world of make-believe," Pamalakaya national chairman Fernando Hicap said. "Her call for a wage hike last Monday is fake and was meant to counter the sharp drop in her approval rating.

"Hicap said what workers – both in private and public sectors badly – need a P 125 across-the-board pay hike to cope with the rising prices of food and other necessities.

The two groups said the P350 minimum wage is actually worth P245.61 today based on the present inflation rate.

Based on NWPC findings, each family of six needs P768 per day to survive in Metro Manila. The current P350 minimum wage, which is what non-agricultural workers regularly receive, is far below the amount required for a family of six to survive.

The Autonomous Region in Muslim Mindanao (ARMM) has the lowest minimum wage, pegged at P200 a day. A family of six in that region needs P1,008 a day to survive. But the nominal basic pay of P200, translated into a real wage, would be only P136.

"Arroyo merely wants to divide the labor sector by announcing wage increase for 15 percent of the population, and denying 85 percent of the country’s labor force of their much needed pay hike," Hicap said.

The wage hike would not cover the 600,000 fish workers in the commercial and aquaculture sectors.

On behalf of agricultural workers, UMA national chairperson Rene Galang, a Hacienda Luisita worker, had this to say: "Mrs. Arroyo merely gave false hopes out of her empty promise. Anyway, agricultural workers do not believe her, because for every 10 promises she made, 11 are broken, according to her track record as an enemy of labor and willing puppet of foreign and local capitalists."





Source: D’Jay Lazaro, GMANews.TV

Friday, April 11, 2008

Internet


The Internet is a worldwide, publicly accessible series of interconnected computer networks that transmit data by packet switching using the standard Internet Protocol (IP). It is a "network of networks" that consists of millions of smaller domestic, academic, business, and government networks, which together carry various information and services, such as electronic mail, online chat, file transfer, and the interlinked web pages and other resources of the World Wide Web (WWW).


Terminology

The Internet and the World Wide Web are not one and the same. The Internet is a collection of interconnected computer networks, linked by copper wires, fiber-optic cables, wireless connections, etc. In contrast, the Web is a collection of interconnected documents and other resources, linked by hyperlinks and URLs. The World Wide Web is one of the services accessible via the Internet, along with various others including e-mail, file sharing, online gaming and others described below. However, informally "the Internet" is often used to refer to the World Wide Web (an example of synecdoche), and it is listed as a synonym in Roget's New Millennium Thesaurus.



Today's Internet

Aside from the complex physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is essentially defined by its interconnections and routing policies.
As of December 30, 2007, 1.319 billion people use the Internet according to Internet World Stats. Writing in the Harvard International Review, philosopher N.J. Slabbert, a writer on policy issues for the Washington, D.C.–based Urban Land Institute, has asserted that the Internet is fast becoming a basic feature of global civilization, so that what has traditionally been called "civil society" is now becoming identical with information technology society as defined by Internet use.

Internet protocols

At the lower level (OSI layer 3) is IP (Internet Protocol), which defines the datagrams or packets that carry blocks of data from one node to another. The vast majority of today's Internet uses version four of the IP protocol (i.e. IPv4), and, although IPv6 is standardized, it exists only as "islands" of connectivity, and there are many ISPs without any IPv6 connectivity. ICMP (Internet Control Message Protocol) also exists at this level. ICMP is connectionless; it is used for control, signaling, and error reporting purposes.
TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) exist at the next layer up (OSI layer 4); these are the protocols by which data is transmitted. TCP makes a virtual "connection", which gives some level of guarantee of reliability. UDP is a best-effort, connectionless transport, in which data packets that are lost in transit will not be re-sent.
The application protocols sit on top of TCP and UDP and occupy layers 5, 6, and 7 of the OSI model. These define the specific messages and data formats sent and understood by the applications running at each end of the communication. Examples of these protocols are HTTP, FTP, and SMTP.
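To make the layering concrete, here is a minimal Python sketch that sends the same greeting over TCP and over UDP; the loopback address and port 9999 stand in for a hypothetical local echo service and are assumptions for illustration only.

    import socket

    # TCP (OSI layer 4): connection-oriented, with delivery guarantees.
    # The handshake that creates the virtual "connection" happens in connect().
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("127.0.0.1", 9999))   # hypothetical local echo service
    tcp.sendall(b"hello over TCP")     # lost segments are retransmitted
    tcp.close()

    # UDP (OSI layer 4): connectionless, best-effort.
    # sendto() emits a single datagram; if it is lost, it is not re-sent.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"hello over UDP", ("127.0.0.1", 9999))
    udp.close()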

Internet structure

There have been many analyses of the Internet and its structure. For example, it has been determined that the Internet IP routing structure and hypertext links of the World Wide Web are examples of scale-free networks.
Similar to the way the commercial Internet providers connect via Internet exchange points, research networks tend to interconnect into large subnetworks such as:
GEANT
GLORIAD
The Internet2 Network (formerly known as the Abilene Network)
JANET (the UK's national research and education network)
These in turn are built around relatively smaller networks. See also the list of academic computer network organizations.
In network diagrams, the Internet is often represented by a cloud symbol, into and out of which network communications can pass.



Internet and the workplace

The Internet is allowing greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections and Web applications.
The Internet viewed on mobile devices
The Internet can now be accessed virtually anywhere by numerous means. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet from anywhere there is a cellular network supporting that device's technology.



The World Wide Web

Many people use the terms Internet and World Wide Web (or just the Web) interchangeably, but, as discussed above, the two terms are not synonymous.
The World Wide Web is a huge set of interlinked documents, images and other resources, linked by hyperlinks and URLs. These hyperlinks and URLs allow the web servers and other machines that store originals, and cached copies, of these resources to deliver them as required using HTTP (Hypertext Transfer Protocol). HTTP is only one of the communication protocols used on the Internet.
Web services also use HTTP to allow software systems to communicate in order to share and exchange business logic and data.
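As a rough illustration of HTTP riding on top of TCP, the following sketch issues a hand-written HTTP/1.0 GET request over a plain socket and prints the start of the reply; example.com is used purely as a placeholder host.

    import socket

    # Open a TCP connection to port 80 and speak HTTP over it by hand.
    sock = socket.create_connection(("example.com", 80))
    request = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n"
    sock.sendall(request.encode("ascii"))

    # With HTTP/1.0 the server closes the connection when it is done,
    # so recv() eventually returns b"" and the loop ends.
    response = b""
    while chunk := sock.recv(4096):
        response += chunk
    sock.close()

    print(response[:120].decode("latin-1"))  # e.g. "HTTP/1.0 200 OK ..."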
Software products that can access the resources of the Web are correctly termed user agents. In normal use, web browsers, such as Internet Explorer and Firefox, access web pages and allow users to navigate from one to another via hyperlinks. Web documents may contain almost any combination of computer data including photographs, graphics, sounds, text, video, multimedia and interactive content including games, office applications and scientific demonstrations.
Through keyword-driven Internet research using search engines like Yahoo! and Google, millions of people worldwide have easy, instant access to a vast and diverse amount of online information. Compared to encyclopedias and traditional libraries, the World Wide Web has enabled a sudden and extreme decentralization of information and data.
It is also easier, using the Web, than ever before for individuals and organisations to publish ideas and information to an extremely large audience. Anyone can find ways to publish a web page or build a website for very little initial cost. Publishing and maintaining large, professional websites full of attractive, diverse and up-to-date information is still a difficult and expensive proposition, however.
Many individuals and some companies and groups use "web logs" or blogs, which are largely used as easily updatable online diaries. Some commercial organizations encourage staff to fill them with advice on their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result. One example of this practice is Microsoft, whose product developers publish their personal blogs in order to pique the public's interest in their work.
Collections of personal web pages published by large service providers remain popular, and have become increasingly sophisticated. Whereas operations such as Angelfire and GeoCities have existed since the early days of the Web, newer offerings from, for example, Facebook and MySpace currently have large followings. These operations often brand themselves as social network services rather than simply as web page hosts.
Advertising on popular web pages can be lucrative, and e-commerce or the sale of products and services directly via the Web continues to grow.
In the early days, web pages were usually created as sets of complete and isolated HTML text files stored on a web server. More recently, websites are more often created using content management system (CMS) or wiki software with, initially, very little content. Contributors to these systems, who may be paid staff, members of a club or other organisation or members of the public, fill underlying databases with content using editing pages designed for that purpose, while casual visitors view and read this content in its final HTML form. There may or may not be editorial, approval and security systems built into the process of taking newly entered content and making it available to the target visitors.



Remote access

The Internet allows computer users to connect to other computers and information stores easily, wherever they may be across the world. They may do this with or without the use of security, authentication and encryption technologies, depending on the requirements.
This is encouraging new ways of working from home, collaboration and information sharing in many industries. An accountant sitting at home can audit the books of a company based in another country, on a server situated in a third country that is remotely maintained by IT specialists in a fourth. These accounts could have been created by home-working bookkeepers, in other remote locations, based on information e-mailed to them from offices all over the world. Some of these things were possible before the widespread use of the Internet, but the cost of private leased lines would have made many of them infeasible in practice.
An office worker away from his desk, perhaps on the other side of the world on a business trip or a holiday, can open a remote desktop session into his normal office PC using a secure Virtual Private Network (VPN) connection via the Internet. This gives the worker complete access to all of his or her normal files and data, including e-mail and other applications, while away from the office.
This concept is also referred to by some network security people as the Virtual Private Nightmare, because it extends the secure perimeter of a corporate network into its employees' homes; this has been the source of some notable security breaches, but also provides security for the workers.



Collaboration

The low cost and nearly instantaneous sharing of ideas, knowledge, and skills has made collaborative work dramatically easier. Not only can a group cheaply communicate and test, but the wide reach of the Internet allows such groups to easily form in the first place, even among niche interests. An example of this is the free software movement in software development, which produced GNU and Linux from scratch and has taken over development of Mozilla and OpenOffice.org (formerly known as Netscape Communicator and StarOffice). Films such as Zeitgeist, Loose Change and Endgame have had extensive coverage on the Internet, while being virtually ignored in the mainstream media.
Internet "chat", whether in the form of IRC "chat rooms" or channels, or via instant messaging systems, allow colleagues to stay in touch in a very convenient way when working at their computers during the day. Messages can be sent and viewed even more quickly and conveniently than via e-mail. Extension to these systems may allow files to be exchanged, "whiteboard" drawings to be shared as well as voice and video contact between team members.
Version control systems allow collaborating teams to work on shared sets of documents without either accidentally overwriting each other's work or having members wait until they get "sent" documents to be able to add their thoughts and changes.

File sharing

A computer file can be e-mailed to customers, colleagues and friends as an attachment. It can be uploaded to a website or FTP server for easy download by others. It can be put into a "shared location" or onto a file server for instant use by colleagues. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks.
In any of these cases, access to the file may be controlled by user authentication; the transit of the file over the Internet may be obscured by encryption, and money may change hands before or after access to the file is given. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—hopefully fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by digital signatures or by MD5 or other message digests.
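As a sketch of that last step, the snippet below checks a downloaded file against a published MD5 digest using Python's hashlib; the filename and the expected digest value are hypothetical.

    import hashlib

    def file_digest(path, algorithm="md5", chunk_size=8192):
        """Compute the hex digest of a file, reading it in chunks."""
        h = hashlib.new(algorithm)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical digest published alongside the download.
    expected = "9e107d9d372bb6826bd81d3542a419d6"
    if file_digest("download.zip") == expected:
        print("Digest matches: the file arrived intact.")
    else:
        print("Digest mismatch: the file is corrupted or altered.")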
These simple features of the Internet, available worldwide, are changing the basis for the production, sale, and distribution of anything that can be reduced to a computer file for transmission. This includes all manner of print publications, software products, news, music, film, video, photography, graphics and the other arts. This in turn has caused seismic shifts in each of the existing industries that previously controlled the production and distribution of these products.
Internet collaboration technology enables business and project teams to share documents, calendars and other information. Such collaboration occurs in a wide variety of areas including scientific research, software development, conference planning, political activism and creative writing.



Internet access

Public places to use the Internet include libraries and Internet cafes, where computers with Internet connections are available. There are also Internet access points in many public places such as airport halls and coffee shops, in some cases just for brief use while standing. Various terms are used, such as "public Internet kiosk", "public access terminal", and "Web payphone". Many hotels now also have public terminals, though these are usually fee-based. These terminals are widely used for purposes such as ticket booking, bank deposits and online payments. Wi-Fi provides wireless access to computer networks, and therefore can do so to the Internet itself. Hotspots providing such access include Wi-Fi cafes, where would-be users need to bring their own wireless-enabled devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. A hotspot need not be limited to a confined location: a whole campus or park, or even an entire city, can be enabled. Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services covering large city areas are in place in London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. The Internet can then be accessed from such places as a park bench.
Apart from Wi-Fi, there have been experiments with proprietary mobile wireless networks like Ricochet, various high-speed data services over cellular phone networks, and fixed wireless services.
High-end mobile phones such as smartphones generally come with Internet access through the phone network. Web browsers such as Opera are available on these advanced handsets, which can also run a wide variety of other Internet software. More mobile phones have Internet access than PCs, though this is not as widely used. An Internet access provider and protocol matrix differentiates the methods used to get online.

Social impact

The Internet has made possible entirely new forms of social interaction, activities and organizing, thanks to its basic features such as widespread usability and access.
Social networking websites such as Facebook and MySpace have created a new form of socialization and interaction. Users of these sites are able to add a wide variety of items to their personal pages, to indicate common interests, and to connect with others. It is also possible to find a large circle of existing acquaintances, especially if a site allows users to utilize their real names, and to allow communication among large existing groups of people.
Sites like meetup.com exist to allow wider announcement of groups which may exist mainly for face-to-face meetings, but which may have a variety of minor interactions over their group's site at meetup.com, or other similar sites.

Political organization and censorship

In democratic societies, the Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States became famous for its ability to generate donations via the Internet. Many political groups use the Internet to achieve a whole new method of organizing, in order to carry out Internet activism.
Some governments, such as those of Cuba, Iran, North Korea, Myanmar, the People's Republic of China, and Saudi Arabia, restrict what people in their countries can access on the Internet, especially political and religious content. This is accomplished through software that filters domains and content so that they may not be easily accessed or obtained without elaborate circumvention.
In Norway, Denmark, Finland and Sweden, major Internet service providers have voluntarily (possibly to avoid such an arrangement being turned into law) agreed to restrict access to sites listed by police. While this list of forbidden URLs is only supposed to contain addresses of known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws making the possession or distribution of certain material, such as child pornography, illegal, but do not use filtering software.
There are many free and commercially available software programs with which a user can choose to block offensive websites on individual computers or networks, such as to limit a child's access to pornography or violence.

Leisure activities

The Internet has been a major source of leisure since before the World Wide Web, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much of the main traffic. Today, many Internet forums have sections devoted to games and funny videos; short cartoons in the form of Flash movies are also popular. Over 6 million people use blogs or message boards as a means of communication and for the sharing of ideas.
The pornography and gambling industries have both taken full advantage of the World Wide Web, and often provide a significant source of advertising revenue for other websites. Although many governments have attempted to put restrictions on both industries' use of the Internet, this has generally failed to stop their widespread popularity.
One main area of leisure on the Internet is multiplayer gaming. This form of leisure creates communities, bringing people of all ages and origins to enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing games to online gambling. This has revolutionized the way many people interact and spend their free time on the Internet.
While online gaming has been around since the 1970s, modern modes of online gaming began with services such as GameSpy and MPlayer, to which players of games would typically subscribe. Non-subscribers were limited to certain types of gameplay or certain games.
Many use the Internet to access and download music, movies and other works for their enjoyment and relaxation. As discussed above, there are paid and unpaid sources for all of these, using centralized servers and distributed peer-to-peer technologies. Discretion is needed as some of these sources take more care over the original artists' rights and over copyright laws than others.
Many use the World Wide Web to access news, weather and sports reports, to plan and book holidays and to find out more about their random ideas and casual interests.
People use chat, messaging and e-mail to make and stay in touch with friends worldwide, sometimes in the same way as some previously had pen pals. Social networking websites like MySpace, Facebook and many others like them also put and keep people in contact for their enjoyment.
The Internet has seen a growing number of Internet operating systems, where users can access their files, folders, and settings via the Internet. An example of an open-source web OS is eyeOS.

Complex architecture

Many computer scientists see the Internet as a "prime example of a large-scale, highly engineered, yet highly complex system". The Internet is extremely heterogeneous. (For instance, data transfer rates and physical characteristics of connections vary widely.) The Internet exhibits "emergent phenomena" that depend on its large-scale organization. For example, data transfer rates exhibit temporal self-similarity. Further adding to the complexity of the Internet is the ability of more than one computer to use the Internet through only one node, creating the possibility of a very deep and hierarchical sub-network that can theoretically be extended infinitely (disregarding the programmatic limitations of the IPv4 protocol). However, since the principles of this architecture date back to the 1960s, it might not be a solution best suited to modern needs, and the possibility of developing alternative structures is currently being investigated.
According to a June 2007 article in Discover magazine, the combined weight of all the electrons moved within the Internet in a day is 0.2 millionths of an ounce. Others have estimated this at nearer 2 ounces (50 grams).

Marketing

The Internet has also become a large market for companies; some of the biggest companies today have grown by taking advantage of the efficient nature of low-cost advertising and commerce through the Internet, also known as e-commerce. It is the fastest way to spread information to a vast number of people simultaneously. The Internet has also revolutionized shopping: for example, a person can order a CD online and receive it in the mail within a couple of days, or download it directly in some cases. The Internet has also greatly facilitated personalized marketing, which allows a company to market a product to a specific person or a specific group of people more effectively than any other advertising medium.
Examples of personalized marketing include online communities such as MySpace, Friendster, Orkut, Facebook and others which thousands of Internet users join to advertise themselves and make friends online. Many of these users are young teens and adolescents ranging from 13 to 25 years old. When they advertise themselves, they advertise their interests and hobbies, which online marketing companies can use to gauge what those users are likely to purchase online, and to advertise their own companies' products to those users.



The name Internet

Internet is traditionally written with a capital first letter, as it is a proper noun. The Internet Society, the Internet Engineering Task Force, the Internet Corporation for Assigned Names and Numbers, the World Wide Web Consortium, and several other Internet-related organizations use this convention in their publications.
Many newspapers, newswires, periodicals, and technical journals capitalize the term (Internet). Examples include The New York Times, the Associated Press, Time, The Times of India, Hindustan Times, and Communications of the ACM.
Others assert that the first letter should be in lower case (internet), and that the specific article the is sufficient to distinguish "the internet" from other internets. A significant number of publications use this form, including The Economist, the Canadian Broadcasting Corporation, the Financial Times, The Guardian, The Times, and The Sydney Morning Herald. As of 2005, many publications using internet appear to be located outside of North America—although one U.S. news source, Wired News, has adopted the lower-case spelling.
Historically, Internet and internet have had different meanings, with internet meaning "an interconnected set of distinct networks", and Internet referring to the worldwide, publicly available IP internet. Under this distinction, the Internet is the familiar network through which websites exist; however, an internet can exist between any two remote locations. Any group of distinct networks connected together is an internet; each of these networks may or may not be part of the Internet. The distinction was evident in many RFCs, books, and articles from the 1980s and early 1990s (some of which, such as RFC 1918, refer to "internets" in the plural), but has recently fallen into disuse. Instead, the term intranet is generally used for private networks, whether or not they are connected to the Internet.
Some people use the lower-case form when referring to the Internet as a medium (like radio or newspaper, e.g., "I found it on the internet"), and the capitalized form when referring to the global network itself.

Web portal

A web portal is a site that provides a single function via a web page or site. Web portals often function as a point of access to information on the World Wide Web, presenting information from diverse sources in a unified way. Aside from the standard search engine feature, web portals offer other services such as e-mail, news, stock prices, infotainment and various other features. Portals provide a way for enterprises to provide a consistent look and feel, with access control and procedures for multiple applications which otherwise would have been different entities altogether. An example of a web portal is Yahoo!.
Portals fall into two broad categories: horizontal portals (e.g., Yahoo!) and vertical portals, which focus on one functional area (e.g., salesforce.com).
A personal portal is a site on the World Wide Web that typically provides personalized capabilities to its visitors, providing a pathway to other content. It is designed to use distributed applications, different numbers and types of middleware, and hardware to provide services from a number of different sources. In addition, business portals are designed to support collaboration in workplaces. A further business-driven requirement of portals is that the content be able to work on multiple platforms such as personal computers, personal digital assistants (PDAs), and cell phones/mobile phones.
A personal or web portal can be integrated with many forum systems.

Why portals?

It is often necessary within an enterprise to have a centralized application that can access various other applications and share information across them. The various users, with different roles, who access the different applications also prefer a single point of access to all of them over the Internet. They like to personalize the applications and have the coupled applications coordinated. Above all, administrators like to have their administrative tools in a single place for all the applications. All of this is achieved through portals. Since all the applications share information through portals, there is better communication between the various types of users. Another advantage of portals is that they can support event-driven campaigns. Below is a detailed list of advantages of using portals:
Intelligent integration and access to enterprise content, applications and processes
Improved communication and collaboration among customers, partners, and employees
Unified, real-time access to information held in disparate systems
Personalized user modification and maintenance of the website presentation
Below are the properties of portals:
Look and feel
Consistent headers and footers, color schemes, icons and logos, which give the user a sense of consistency, uniformity, and ease of navigation
A portlet is an application within a browser window, displayed in an effective layout
A portlet is itself a web application
Portlets are aggregated by the portal page

Customization
Users can individually control a portal’s look and feel by setting the portlet layout and appearance.

Development of personal portals
In the late 1990s, the web portal was a hot commodity. After the proliferation of web browsers in the mid-1990s, many companies tried to build or acquire a portal, to have a piece of the Internet market. The web portal gained special attention because it was, for many users, the starting point of their web browser. Netscape became a part of America Online, the Walt Disney Company launched Go.com, and Excite and @Home became a part of AT&T during the late 1990s. Lycos was said to be a good target for other media companies such as CBS.
Many of the portals started initially as either web directories (notably Yahoo!) and/or search engines (Excite, Lycos, AltaVista, Infoseek, and HotBot among the old ones). Expanding services was a strategy to secure the user base and lengthen the time a user stayed on the portal. Services which require user registration, such as free email, customization features, and chatrooms, were considered to enhance repeat use of the portal. Games, chat, email, news, and other services also tend to make users stay longer, thereby increasing the advertising revenue.
The portal craze, with "old media" companies racing to outbid each other for Internet properties, died down with the dot-com flameout in 2000 and 2001. Disney pulled the plug on Go.com, and Excite went bankrupt and its remains were sold to iWon.com. Some notable portal sites, Yahoo! for instance, remain successful to this day. To modern dot-com businesses, the portal craze serves as a cautionary tale about the risks of rushing into a market crowded with highly capitalized but largely undifferentiated me-too companies.
A leading academic institution portal framework is uPortal by JA-SIG.


Regional web portals
Along with the development and success of international personal portals such as Yahoo!, regional variants have also sprung up. Some regional portals contain local information such as weather forecasts, street maps and local business information. Another notable expansion over the past couple of years is the move into formerly unthinkable markets.
"Local content - global reach" portals have emerged not only from countries like Korea (Naver), India (Rediff), China (Sina.com), Romania(Neogen.ro), Greece(in.gr) and Italy (Webplace.it), but in countries like Vietnam where they are very important for learning how to apply e-commerce, e-government, etc. Such portals reach out to the widespread diaspora across the world.


Government web portals
At the end of the dot-com boom in the 1990s, many governments had already committed to creating portal sites for their citizens. In the United States the main portal is USA.gov, in addition to portals developed for specific audiences such as DisabilityInfo.gov; in the United Kingdom the main portals are Directgov (for citizens) and businesslink.gov.uk (for businesses).
Many U.S. states have their own portals which provide direct access to eCommerce applications (e.g., Hawaii Business Express and myIndianaLicense), agency and department web sites, and more specific information about living in, doing business in and getting around the state.
Many U.S. states have chosen to outsource the operation of their portals to third-party vendors; NICUSA, for example, runs 21 state portals.
The National Portal of India provides comprehensive, accurate, reliable and up-to-date information about India and its various facets.
One of the issues that comes up with government web portals is that different agencies often have their own portals and sometimes a statewide portal-directory structure is not sophisticated and deep enough to meet the needs of multiple agencies.


Corporate web portals
Corporate intranets gained popularity during the 1990s. Having access to a variety of company information via a web browser was a new way of working. Intranets quickly grew in size and complexity, and webmasters (many of whom lacked the discipline of managing content and users) became overwhelmed in their duties. It wasn't enough to have a consolidated view of company information; users were demanding personalization and customization. Webmasters, if skilled enough, were able to offer some capabilities, but for the most part ended up driving users away from using the intranet.
The 1990s were a time of innovation for the concept of corporate web portals. Many companies began to offer tools to help webmasters manage their data, applications and information more easily, and through personalized views. Some portal solutions today are able to integrate legacy applications and other portal objects, and handle thousands of user requests.
Today’s corporate portals are sprouting new value-added capabilities for businesses. Capabilities such as managing workflows, increasing collaboration between work groups, and allowing content creators to self-publish their information are lifting the burden off already strapped IT departments.
In addition, most portal solutions today, if architected correctly, can allow internal and external access to specific corporate information using secure authentication or Single sign-on.
The JSR 168 standard emerged around 2001. Java Specification Request (JSR) 168 allows the interoperability of portlets across different portal platforms. The standard allows portal developers, administrators and consumers to integrate standards-based portals and portlets across a variety of vendor solutions.
Microsoft's SharePoint Portal Server line of products has been gaining popularity among corporations for building their portals, partly due to its tight integration with the rest of the Microsoft Office products. Research by Forrester Research in 2004 shows that Microsoft is the vendor of choice for companies looking for portal server software.
In response to Microsoft's strong presence in the portal market, other portal vendors are being acquired or are challenging Microsoft's offering. In 2007, Oracle Corporation released WebCenter Suite, a product similar to SharePoint, with a full line of collaboration tools (blogs, wikis, team spaces, calendaring, email, etc.).
In addition, the popularity of content aggregation is growing, and portal solutions will continue to evolve significantly over the next few years. The Gartner Group predicts that generation 8 portals will expand on the enterprise mash-up concept of delivering a variety of information, tools, applications and access points through a single mechanism.
With the increase in user-generated content, disparate data silos, and file formats, information architects and taxonomists will be required to give users the ability to tag (classify) the data. This will ultimately cause a ripple effect in which users also generate ad hoc navigation and information flows.
Some useful lessons can be learned from Web 2.0 applications such as Netvibes, Pageflakes and Protopage; a new breed of competitors, such as PersonAll, is using this angle to enter the market.


Hosted web portals
As corporate portals gained popularity a number of companies began offering them as a hosted service. The hosted portal market fundamentally changed the composition of portals. In many ways they served simply as a tool for publishing information instead of the loftier goals of integrating legacy applications or presenting correlated data from distributed databases. The early hosted portal companies such as Hyperoffice.com or the now defunct InternetPortal.com focused on collaboration and scheduling in addition to the distribution of corporate data. As hosted web portals have risen in popularity their feature set has grown to include hosted databases, document management, email, discussion forums and more. Hosted portals automatically personalize the content generated from their modules to provide a personalized experience to their users. In this regard they have remained true to the original goals of the earlier corporate web portals.

Domain specific portals
A number of portals have come about that are specific to a particular domain, offering access to related companies and services. A prime example of this trend is the growth in property portals, which give access to services such as estate agents, removal firms and solicitors that offer conveyancing.

Forwarding Plane (a.k.a. Data Plane)

For the pure Internet Protocol (IP) forwarding function, router design tries to minimize the state information kept on individual packets. Once a packet is forwarded, the router should no longer retain statistical information about it. It is the sending and receiving endpoints that keep information about such things as errored or missing packets.
Forwarding decisions can involve decisions at layers other than the IP internetwork layer or OSI layer 3. Again, the marketing term switch can be applied to devices that have these capabilities. A function that forwards based on data link layer, or OSI layer 2, information, is properly called a bridge. Marketing literature may call it a layer 2 switch, but a switch has no precise definition.
Among the most important forwarding decisions is deciding what to do when congestion occurs, i.e., packets arrive at the router at a rate higher than the router can process. Three policies commonly used in the Internet are Tail drop, Random early detection, and Weighted random early detection. Tail drop is the simplest and most easily implemented; the router simply drops packets once the length of the queue exceeds the size of the buffers in the router. Random early detection (RED) probabilistically drops datagrams early when the queue exceeds a configured size. Weighted random early detection requires a weighted average queue size to exceed the configured size, so that short bursts will not trigger random drops.
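A toy model of these drop policies, under an assumed queue capacity and assumed RED thresholds (the numbers are illustrative, not taken from any standard):

    import random

    BUFFER_SIZE = 64          # queue capacity in packets (assumed)
    MIN_TH, MAX_TH = 16, 48   # RED thresholds on the average queue length

    def tail_drop(queue_len):
        # Tail drop: accept until the buffer is full, then drop everything.
        return queue_len >= BUFFER_SIZE

    def red_drop(avg_queue_len):
        # RED: drop with a probability that rises linearly between the two
        # thresholds; never drop below MIN_TH, always drop above MAX_TH.
        if avg_queue_len < MIN_TH:
            return False
        if avg_queue_len >= MAX_TH:
            return True
        return random.random() < (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)

    def weighted_average(prev_avg, sample, weight=0.002):
        # Weighted RED feeds red_drop() an exponentially weighted moving
        # average of the queue length, so short bursts do not trigger drops.
        return (1 - weight) * prev_avg + weight * sample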
In routing, the forwarding plane defines the part of the router architecture that decides what to do with packets arriving on an inbound interface. Most commonly, it refers to a table in which it looks up the destination address in the incoming packet header, and retrieves information telling it the outgoing interface(s) to which the receiving element should send it through the internal forwarding fabric of the router. The IP Multimedia Subsystem architecture uses the term transport plane to describe a function roughly equivalent to the routing control plane.
The table also might specify that the packet is discarded. In some cases, the router will return an ICMP "destination unreachable" or other appropriate code. Some security policies, however, dictate that the router should be programmed to drop the packet silently. By dropping filtered packets silently, a potential attacker does not become aware of a target that is being protected.
The incoming forwarding element will also decrement the time-to-live (TTL) field of the packet, and, if the new value is zero, discard the packet. While the IP specification indicates that an ICMP TTL exceeded message should be sent to the originator of the packet (i.e., the node with the source address in the packet), routers may be programmed to drop the packet silently.
Depending on the specific router implementation, the table in which the destination address is looked up could be the routing table (also known as the routing information base), or a separate forwarding information base that is populated (i.e., loaded) by the control plane, but used by the forwarding plane to look up packets, at very high speed, and decide how to handle them. Before or after examining the destination, other tables may be consulted to make decisions to drop the packet based on other characteristics, such as the source address, the IP protocol identifier field, or TCP or UDP port number.
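The destination lookup just described is a longest-prefix match. The sketch below models a tiny forwarding table with Python's ipaddress module; the prefixes and interface names are invented for illustration.

    import ipaddress

    # Hypothetical forwarding information base: prefix -> outgoing interface.
    fib = {
        ipaddress.ip_network("10.0.0.0/8"):  "eth0",
        ipaddress.ip_network("10.1.0.0/16"): "eth1",
        ipaddress.ip_network("0.0.0.0/0"):   "eth2",  # default route
    }

    def lookup(destination):
        # Return the interface for the longest matching prefix, or None
        # (in which case the packet is discarded, per the policies above).
        addr = ipaddress.ip_address(destination)
        matches = [net for net in fib if addr in net]
        if not matches:
            return None
        return fib[max(matches, key=lambda net: net.prefixlen)]

    print(lookup("10.1.2.3"))   # "eth1": the /16 beats the /8 and the default
    print(lookup("192.0.2.1"))  # "eth2": only the default route matches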
Forwarding plane functions run in the forwarding element. High-performance routers often have multiple distributed forwarding elements, so that the router increases performance with parallel processing.
The outgoing interface will encapsulate the packet in the appropriate data link protocol. Depending on the router software and its configuration, functions, usually implemented at the outgoing interface, may set various packet fields, such as the DSCP field used by differentiated services.
In general, the passage from the input interface directly to an output interface, through the fabric with minimum modification at the output interface, is called the fast path of the router. If the packet needs significant processing, such as segmentation or encryption, it may go onto a slower path, which is sometimes called the services plane of the router. Service planes can make forwarding or processing decisions based on higher-layer information, such as a Web URL contained in the packet payload.

Control Plane


Routers are like junctions, whereas subnets are like streets and hosts like houses.
Control Plane processing leads to the construction of what is variously called a routing table or routing information base (RIB). The RIB may be used by the Forwarding Plane to look up the outbound interface for a given packet, or, depending on the router implementation, the Control Plane may populate a separate Forwarding Information Base (FIB) with destination information. RIBs are optimized for efficient updating with control mechanisms such as routing protocols, while FIBs are optimized for the fastest possible lookup of the information needed to select the outbound interface.
The Control Plane constructs the routing table from knowledge of the up/down status of its local interfaces, from hard-coded static routes, and from exchanging routing protocol information with other routers. It is not compulsory for a router to use routing protocols to function, if for example it was configured solely with static routes. The routing table stores the best routes to certain network destinations, the "routing metrics" associated with those routes, and the path to the next hop router.
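As a minimal sketch of best-route selection, suppose the control plane holds several candidate routes to the same prefix, learned from static configuration and from routing protocols; it installs the most preferred candidate into the table (the preference values and next hops below are assumptions, not real defaults).

    # Candidate routes to one prefix: (source, preference, next hop).
    candidates = [
        ("static", 1,   "192.0.2.1"),
        ("ospf",   110, "192.0.2.9"),
        ("rip",    120, "192.0.2.5"),
    ]

    # Install the candidate with the lowest preference value; the
    # forwarding plane then sends traffic for this prefix to that next hop.
    best = min(candidates, key=lambda route: route[1])
    rib_entry = {"prefix": "203.0.113.0/24",
                 "next_hop": best[2], "learned_via": best[0]}
    print(rib_entry)  # the static route wins here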
Routers do maintain state on the routes in the RIB/routing table, but this is quite distinct from maintaining state on individual packets that have been forwarded, which they do not do.

In routing, the control plane is the part of the router architecture that is concerned with drawing the network map, or the information in a (possibly augmented) routing table that defines what to do with incoming packets. Control plane functions, such as participating in routing protocols, run in the architectural control element. In most cases, the routing table will contain a list of destination addresses and the outgoing interface(s) associated with them. Control plane logic also can define certain packets to be discarded, as well as preferential treatment of certain packets for which a high quality of service is defined by such mechanisms as differentiated services.
Depending on the specific router implementation, there may be a separate Forwarding Information Base that is populated (i.e., loaded) by the Control Plane, but used by the Forwarding Plane to look up packets, at very high speed, and decide how to handle them.

Single-Pair High-speed Digital Subscriber Line [SHDSL]

Single-pair high-speed digital subscriber line (SHDSL) is a telecommunications technology for Digital Subscriber Line (DSL) subscriber lines. It describes a transmission method for signals on copper pair lines, mostly used in access networks to connect subscribers to telephone exchanges or POP access points.
G.SHDSL was standardized in February 2001 internationally by ITU-T with recommendation G.991.2.
G.SHDSL features symmetrical data rates from 192 kbit/s to 2,304 kbit/s of payload in 64 kbit/s increments for one pair and 384 kbit/s to 4,608 kbit/s in 128 kbit/s increments for two pair applications. The reach varies according to the loop rate and noise conditions (more noise or higher rate means decreased reach) and may be up to 3,000 meters. The two pair feature may alternatively be used for increased reach applications by keeping the data rate low (halving the data rate per pair will provide similar speeds to single pair lines while increasing the error/noise tolerance).
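Those rate ladders are easy to enumerate; a quick sketch of the figures given above:

    # Payload rates in kbit/s: 192-2,304 in 64 kbit/s steps on one pair,
    # 384-4,608 in 128 kbit/s steps on two pairs.
    one_pair = range(192, 2304 + 1, 64)
    two_pair = range(384, 4608 + 1, 128)

    print(len(one_pair), list(one_pair)[:3])  # 34 rates: [192, 256, 320] ...
    print(len(two_pair), max(two_pair))       # 34 rates, up to 4608 kbit/s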
The payload may be either 'clear channel' (unstructured), T1 or E1 (full rate or fractional), n x ISDN Basic Rate Access (BRA), Asynchronous Transfer Mode (ATM) or 'dual bearer' mode (i.e. a mixture of two separate streams (e.g. T1 and 'packet based') sharing the payload bandwidth of the G.SHDSL loop).
In Europe, a variant of G.SHDSL was standardized by ETSI using the name 'SDSL'. This ETSI variant is not compatible with the ITU-T G.SHDSL standardized regional variant for Europe and must not be confused with the usage of the term 'SDSL' in North America.
The latest standardization efforts (G.SHDSL.bis) tend to allow for flexibly changing the amount of bandwidth dedicated to each transport unit to provide 'dynamic rate repartitioning' of bandwidth demands during the uptime of the interface and optionally provides for 'extended data rates' by using a different modulation method (32-TCPAM instead of 16-TCPAM, where TCPAM is Trellis-Coded Pulse Amplitude Modulation). Also, a new payload type is introduced: packet based, e.g. to allow for Ethernet-frames to be transported natively. (Currently, they may only be framed in ATM or T1/E1/...). G.SHDSL.bis can deliver a minimum of 2 Mbit/s and a maximum of 5.69 Mbit/s over distances of up to 2.7 km (9 Kft).

Integrated Services Digital Network [ISDN]

Integrated Services Digital Network (ISDN), originally "Integriertes Sprach- und Datennetz" (German for "Integrated Speech and Data Network"), is a circuit-switched telephone network system, designed to allow digital transmission of voice and data over ordinary telephone copper wires, resulting in better voice quality than an analog phone. It offers circuit-switched connections (for either voice or data) in increments of 64 kbit/s. One of the major use cases is Internet access, where ISDN typically provides a maximum of 128 kbit/s (which cannot be considered a broadband speed). More broadly, ISDN is a set of protocols for establishing and breaking circuit-switched connections, and for advanced call features for the user. It was introduced in the late 1980s.
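The 128 kbit/s figure comes from bonding the two 64 kbit/s bearer (B) channels of a Basic Rate Interface; the separate 16 kbit/s signaling (D) channel carries call control rather than payload. A small worked calculation:

    B_CHANNEL = 64   # kbit/s per bearer channel
    D_CHANNEL = 16   # kbit/s signaling channel on a Basic Rate Interface

    payload   = 2 * B_CHANNEL              # 128 kbit/s usable for data
    line_rate = 2 * B_CHANNEL + D_CHANNEL  # 144 kbit/s total on the line
    print(payload, line_rate)              # 128 144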
In a videoconference, ISDN provides simultaneous voice, video, and text transmission between individual desktop videoconferencing systems and group (room) videoconferencing systems.

ISDN elements

Integrated Services refers to ISDN's ability to deliver at minimum two simultaneous connections, in any combination of data, voice, video, and fax, over a single line. Multiple devices can be attached to the line, and used as needed. That means an ISDN line can take care of most people's complete communications needs at a much higher transmission rate, without forcing the purchase of multiple analog phone lines.
Digital refers to its purely digital transmission, as opposed to the analog transmission of plain old telephone service (POTS). Use of an analog telephone modem for Internet access requires the Internet service provider's (ISP) modem to convert the digital content to analog signals before sending it, and the user's modem to convert those signals back to digital when receiving. When connecting with ISDN there is no analog conversion.
Network refers to the fact that ISDN is not simply a point-to-point solution like a leased line. ISDN networks extend from the local telephone exchange to the remote user and include all of the telecommunications and switching equipment in between.
The purpose of the ISDN is to provide fully integrated digital services to the users. These services fall under three categories: bearer services, supplementary services and teleservices.

Consumer and industry perspectives

There are two points of view into the ISDN world. The most common viewpoint is that of the end user, who wants to get a digital connection into the telephone/data network from home, with performance better than an ordinary analog modem connection. The typical end user's connection to the Internet is related to this point of view, and discussion of the merits of various ISDN modems, carriers' offerings and tariffing (features, pricing) is from this perspective. Much of the following discussion is from this point of view, but it should be noted that, as a data connection service, ISDN has been mostly superseded by DSL.
There is a second viewpoint: that of the telephone industry, where ISDN is a core technology. A telephone network can be thought of as a collection of wires strung between switching systems. The common electrical specification for the signals on these wires is T1 or E1. On a normal T1, the signalling is done with A&B bits to indicate on-hook or off-hook conditions and MF and DTMF tones to encode the destination number. ISDN is much better because messages can be sent much more quickly than by trying to encode numbers as long (100 ms per digit) tone sequences. This translated to much faster call setup times, which is greatly desired by carriers who have to pay for line time and also by callers who become impatient while their call hops from switch to switch.
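A back-of-envelope comparison makes the point: at roughly 100 ms per digit, signaling a 10-digit number with tones takes about a second, while a single ISDN setup message can carry the whole number in a few milliseconds (the message timing below is an assumption for illustration).

    digits = 10
    dtmf_ms_per_digit = 100   # ~100 ms per tone, as noted above
    isdn_setup_ms = 5         # assumed time to deliver one setup message

    print(digits * dtmf_ms_per_digit)  # 1000 ms just to convey the number
    print(isdn_setup_ms)               # ~5 ms as one ISDN message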
ISDN is also used as a smart-network technology intended to add new services to the public switched telephone network (PSTN) by giving users direct access to end-to-end circuit-switched digital services.

Fiber to the premises [FTTP]

Fiber to the premises (FTTP) is a form of fiber-optic communication delivery in which an optical fiber is run directly to the customer's premises. This contrasts with other fiber-optic communication delivery strategies such as fiber to the node (FTTN), fiber to the curb (FTTC), or hybrid fibre-coaxial (HFC), all of which depend upon more traditional methods such as copper wires or coaxial cable for "last mile" delivery.
Fiber to the premises can be further categorized according to where the optical fiber ends:
FTTH (fiber to the home) is a form of fiber optic communication delivery in which the optical signal reaches the end user's living or office space.
An optical signal is distributed from the central office over an optical distribution network (ODN). At the endpoints of this network, devices called optical network terminals (ONTs) convert the optical signal into an electrical signal. (For FTTP architectures, these ONTs are located on private property.) The signal usually travels electrically between the ONT and the end-users' devices.


Optical portion
Optical distribution networks have several competing technologies.

Direct fiber


The simplest optical distribution network can be called direct fiber. In this architecture, each fiber leaving the central office goes to exactly one customer. Such networks can provide excellent bandwidth since each customer gets their own dedicated fiber extending all the way to the central office. However, this approach is extremely costly due to the amount of fiber and central office machinery required. It is usually used only in instances where the service area is very small and close to the central office.

Shared fiber


More commonly, each fiber leaving the central office is shared by many customers. It is not until such a fiber gets relatively close to the customers that it is split into individual customer-specific fibers. There are two competing optical distribution network architectures that achieve this split: active optical networks (AONs) and passive optical networks (PONs).
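To see why splitting matters, consider a rough optical power-budget sketch. The per-split and per-kilometer loss figures below are typical textbook values assumed for illustration, not taken from this article:

    import math

    # Typical textbook loss values, assumed for illustration.
    SPLIT_LOSS_DB = 3.5          # per 1:2 split stage, including excess loss
    FIBER_LOSS_DB_PER_KM = 0.35  # single-mode fiber around 1310 nm

    def pon_loss_db(split_ratio, km):
        # Total loss for a 1:N splitter (N a power of two) plus the fiber run.
        stages = math.log2(split_ratio)
        return stages * SPLIT_LOSS_DB + km * FIBER_LOSS_DB_PER_KM

    print(round(pon_loss_db(32, 20), 1))  # 24.5 dB for a 1:32 split over 20 km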


Optical fiber is used by many telecommunications companies to transmit telephone signals, Internet communication, and cable television signals. Due to much lower attenuation and interference, optical fiber has large advantages over existing copper wire in long-distance and high-demand applications. However, infrastructure development within cities was relatively difficult and time-consuming, and fiber-optic systems were complex and expensive to install and operate. Due to these difficulties, fiber-optic communication systems were primarily installed in long-distance applications, where they could be used to their full transmission capacity, offsetting the increased cost. Since the year 2000, the prices for fiber-optic communications have dropped considerably. Rolling out fiber to the home is now more cost-effective than rolling out a copper-based network. Prices have dropped to $850 per subscriber in the US, and lower in countries like the Netherlands, where digging costs are low.
Since 1990, when optical-amplification systems became commercially available, the telecommunications industry has laid a vast network of intercity and transoceanic fiber communication lines. By 2002, an intercontinental network of 250,000 km of submarine communications cable with a capacity of 2.56 Tb/s was completed, and although specific network capacities are privileged information, telecommunications investment reports indicate that network capacity has increased dramatically since 2002.
The need for reliable long-distance communication systems has existed since antiquity. Over time, the sophistication of these systems has gradually improved, from smoke signals to telegraphs and finally to the first coaxial cable, put into service in 1940. As these communication systems improved, certain fundamental limitations presented themselves. Electrical systems were limited by their small repeater spacing (the distance a signal can propagate before attenuation requires the signal to be amplified), and the bit rate of microwave systems was limited by their carrier frequency. In the second half of the twentieth century, it was realized that an optical carrier of information would have a significant advantage over the existing electrical and microwave carrier signals.
In 1966 Kao and Hockham of STC Laboratories (STL) in Harlow proposed optical fibers when they showed that the losses of 1000 dB/km in existing glass (compared to 5-10 dB/km in coaxial cable) were due to contaminants, which could potentially be removed.
The development of lasers in the 1960s solved the first problem, that of a light source; further development of high-quality optical fiber was needed as a solution to the second. Optical fiber with attenuation low enough for communication purposes (about 20 dB/km) was finally developed in 1970 by Corning Glass Works, and at the same time GaAs semiconductor lasers were developed that were compact and therefore suitable for fiber-optic communication systems.
After a period of intensive research from 1975 to 1980, the first commercial fiber-optic communication system was developed, which operated at a wavelength around 0.8 µm and used GaAs semiconductor lasers. This first generation system operated at a bit rate of 45 Mbit/s with repeater spacing of up to 10 km.
On 22 April 1977, General Telephone and Electronics sent the first live telephone traffic through fiber optics, at 6 Mbit/s, in Long Beach, California.
The second generation of fiber-optic communication was developed for commercial use in the early 1980s, operated at 1.3 µm, and used InGaAsP semiconductor lasers. Although these systems were initially limited by dispersion, in 1981 single-mode fiber was shown to greatly improve system performance. By 1987, these systems were operating at bit rates of up to 1.7 Gbit/s with repeater spacing up to 50 km.
The first transatlantic telephone cable to use optical fiber was TAT-8, based on Desurvire optimized laser amplification technology. It went into operation in 1988.
TAT-8 was developed as the first undersea fiber-optic link between the United States and Europe. It is more than 3,000 nautical miles (5,600 km) in length and was the first transatlantic cable to use optical fibers. It was designed to carry a mix of information and, when inaugurated, had an estimated lifetime in excess of 20 years. TAT-8 was the first of a new class of cables, even though the technology had already been used in long-distance terrestrial and short-distance undersea operations. Its installation was preceded by extensive deep-water experiments and trials conducted in the early 1980s to demonstrate the project's feasibility.
Third-generation fiber-optic systems operated at 1.55 µm and had loss of about 0.2 dB/km. They achieved this despite earlier difficulties with pulse-spreading at that wavelength using conventional InGaAsP semiconductor lasers. Scientists overcame this difficulty by using dispersion-shifted fibers designed to have minimal dispersion at 1.55 µm, or by limiting the laser spectrum to a single longitudinal mode. These developments eventually allowed third-generation systems to operate commercially at 2.5 Gbit/s with repeater spacing in excess of 100 km.
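The quoted loss figure translates directly into repeater spacing. A small sketch, assuming an illustrative 20 dB link budget:

    # 0.2 dB/km is the third-generation loss figure quoted above;
    # the 20 dB link budget is an assumed, illustrative value.
    LOSS_DB_PER_KM = 0.2
    LINK_BUDGET_DB = 20.0

    print(LINK_BUDGET_DB / LOSS_DB_PER_KM)  # 100.0 km between repeaters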
The fourth generation of fiber-optic communication systems used optical amplification to reduce the need for repeaters and wavelength-division multiplexing to increase fiber capacity. These two improvements caused a revolution that resulted in the doubling of system capacity every 6 months starting in 1992, until a bit rate of 10 Tbit/s was reached by 2001. Recently, bit rates of up to 14 Tbit/s have been achieved over a single 160 km line using optical amplifiers.
The focus of development for the fifth generation of fiber-optic communications is on extending the wavelength range over which a WDM system can operate. The conventional wavelength window, known as the C band, covers the wavelength range 1.53-1.57 µm, and the new dry fiber has a low-loss window promising an extension of that range to 1.30-1.65 µm. Other developments include the concept of "optical solitons," pulses that preserve their shape by counteracting the effects of dispersion with the nonlinear effects of the fiber, by using pulses of a specific shape.
In the late 1990s through 2000, the fiber-optic communication industry became associated with the dot-com bubble. Industry promoters and research companies such as KMI and RHK predicted vast increases in demand for communications bandwidth due to increased use of the Internet and the commercialization of various bandwidth-intensive consumer services, such as video on demand. Internet protocol data traffic was said to be increasing exponentially, and at a faster rate than integrated circuit complexity had increased under Moore's Law. From the bust of the dot-com bubble through 2006, however, the main trend in the industry has been consolidation of firms and offshoring of manufacturing to reduce costs.

Technology

Modern fiber-optic communication systems generally include an optical transmitter to convert an electrical signal into an optical signal to send into the optical fiber, a cable containing bundles of multiple optical fibers that is routed through underground conduits and buildings, multiple kinds of amplifiers, and an optical receiver to recover the signal as an electrical signal. The information transmitted is typically digital information generated by computers, telephone systems, and cable television companies.

Cable modem


A cable modem is a type of modem that provides access to a data signal sent over the cable television infrastructure. Cable modems are primarily used to deliver broadband Internet access in the form of cable Internet, taking advantage of unused bandwidth on a cable television network. They are commonly found in Australia, New Zealand, Canada, Europe, the United Kingdom, Costa Rica, and the United States. In the USA alone there were 22.5 million cable modem users during the first quarter of 2005, up from 17.4 million in the first quarter of 2004.

Modem (from modulator-demodulator) is a device that modulates an analog carrier signal to encode digital information, and also demodulates such a carrier signal to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data. Modems can be used over any means of transmitting analog signals, from driven diodes to radio.
The most familiar example is a voiceband modem that turns the digital 1s and 0s of a personal computer into sounds that can be transmitted over the telephone lines of Plain Old Telephone Systems (POTS), and once received on the other side, converts those 1s and 0s back into a form used by a USB, Serial, or Network connection. Modems are generally classified by the amount of data they can send in a given time, normally measured in bits per second, or "bps". They can also be classified by Baud, the number of times the modem changes its signal state per second.
Baud is not the same as the modem's speed in bits per second; the baud rate varies depending on the modulation technique used. The original Bell 103 modems used a modulation technique that changed state 300 times per second and transmitted 1 bit per baud, so a 300 bps modem was also a 300-baud modem, though casual users often confused the two. A 300 bps modem is the only modem whose bit rate matches its baud rate. A 2400 bps modem changes state 600 times per second, but because it transmits 4 bits with each change of state, 2400 bits per second are carried by 600 baud.
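The relationship is simply bit rate = baud times bits per symbol, as a minimal Python sketch shows:

    def bit_rate_bps(baud, bits_per_symbol):
        # bit rate = symbol changes per second x bits carried per symbol
        return baud * bits_per_symbol

    print(bit_rate_bps(300, 1))  # 300  -- Bell 103: one bit per state change
    print(bit_rate_bps(600, 4))  # 2400 -- four bits per state change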
Faster modems are used by Internet users every day, notably cable modems and ADSL modems. In telecommunications, "radio modems" transmit repeating frames of data at very high data rates over microwave radio links. Some microwave modems transmit more than a hundred million bits per second. Optical modems transmit data over optical fibers. Most intercontinental data links now use optical modems transmitting over undersea optical fibers. Optical modems routinely have data rates in excess of a billion (1x10^9) bits per second. One kilobit per second (kbit/s, kb/s or kbps) as used in this article means 1000 bits per second, not 1024. For example, a 56k modem can transfer data at up to 56,000 bits per second over the phone line.
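As a worked example of that decimal convention (the file size is chosen purely for illustration):

    # "56k" is decimal: 56,000 bits per second, not 56 x 1024.
    RATE_BIT_S = 56_000
    file_bits = 1_000_000 * 8                # a 1 MB (decimal) file, for illustration
    print(round(file_bits / RATE_BIT_S, 1))  # 142.9 seconds at full speed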


Cable modems in the OSI model or TCP/IP model

In network topology, a cable modem is a network bridge that conforms to IEEE 802.1D for Ethernet networking (with some modifications). The cable modem bridges Ethernet frames between a customer LAN and the coax cable network.
With respect to the OSI model, a cable modem is a data link layer forwarder, rather than simply a modem.
A cable modem does support functionality at other layers. At the physical layer, the cable modem supports the Ethernet PHY on its LAN interface and a DOCSIS-defined cable-specific PHY on its HFC cable interface; it is to this cable-specific PHY that the name cable modem refers. At the network layer, the cable modem is an IP host in that it has its own IP address, used by the network operator to manage and troubleshoot the device. At the transport layer (layer 4), the cable modem supports UDP in association with its own IP address, and it supports filtering based on TCP and UDP port numbers to, for example, block forwarding of NetBIOS traffic out of the customer's LAN. At the application layer, the cable modem supports certain protocols that are used for management and maintenance, notably DHCP, SNMP, and TFTP.
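The NetBIOS filtering mentioned above amounts to a simple port check. A minimal sketch, assuming the well-known NetBIOS ports 137-139 (the function and its name are illustrative, not from any DOCSIS specification):

    # Ports 137-139 are the well-known NetBIOS ports (general knowledge,
    # not taken from this article); the function itself is illustrative.
    NETBIOS_PORTS = {137, 138, 139}

    def should_forward(protocol, dst_port):
        # Drop NetBIOS traffic so it never leaves the customer's LAN.
        if protocol in ("tcp", "udp") and dst_port in NETBIOS_PORTS:
            return False
        return True

    print(should_forward("udp", 137))  # False -- NetBIOS name service blocked
    print(should_forward("tcp", 80))   # True  -- ordinary web traffic passes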
Some cable modem devices may incorporate a router along with the cable modem functionality, to provide the LAN with its own IP network addressing. From a data forwarding and network topology perspective, this router functionality is typically kept distinct from the cable modem functionality (at least logically), even though the two may share a single enclosure and appear as one unit. Thus, the cable modem function will have its own IP address and MAC address, as will the router.


Cable modems and VoIP

With the advent of Voice over IP telephony, cable modems can also be used to provide telephone service. Many people who have cable modems have opted to eliminate their Plain Old Telephone Service (POTS). Because most telephone companies do not offer naked DSL (DSL service without a POTS line), VoIP use is higher amongst cable modem users.
A cable modem subscriber can make use of VoIP telephony by subscribing to a third-party service (e.g. Vonage or Skype). As an alternative, many cable operators offer a VoIP service based on PacketCable. PacketCable allows MSOs to offer both high-speed Internet and VoIP through a single piece of customer premises equipment, known as an Embedded Multimedia Terminal Adapter (EMTA or E-MTA). An EMTA is basically a cable modem and a VoIP adapter (known as a Multimedia Terminal Adapter) bundled into a single device. PacketCable service has a significant technical advantage over third-party providers in that voice packets are given guaranteed quality of service across their entire path, so call quality can be assured.


Hybrid Networks

Hybrid Networks developed, demonstrated and patented the first high-speed, asymmetrical cable modem systems in 1990. A key Hybrid Networks insight was that highly asymmetrical communications would be sufficient to satisfy consumers connected remotely to an otherwise completely symmetric high-speed data communications network. This was important because it was very expensive to provide high speed in the upstream direction, while CATV systems already had substantial broadband capacity in the downstream direction. Another key insight was that upstream and downstream communications could travel over the same or different communications media, using different protocols in each direction, to form a closed-loop communications system; the speeds and protocols used in each direction could be very different. The earliest systems used the public switched telephone network (PSTN) for the return path, since very few cable systems were bi-directional. Later systems used cable for the upstream as well as the downstream path.
Initially there was extreme skepticism toward this approach; many technical people doubted that it could work at all. Nevertheless, Hybrid's system architecture is the way most cable modem systems operate today.

LANcity

LANcity was an early pioneer in cable modems, developing a proprietary system that saw fairly wide deployment in the US. LANcity was sold to Bay Networks which was then acquired by Nortel, which eventually spun the cable modem business off as ARRIS. ARRIS continues to make cable modems and CMTS equipment compliant with the DOCSIS standard.

CDLP

CDLP was a proprietary system made by Motorola. CDLP customer premises equipment (CPE) was capable of both PSTN (telephone network) and RF (cable network) return paths. PSTN-return-path cable modem service was considered 'one-way cable' and had many of the same drawbacks as satellite Internet service; as a result, it quickly gave way to two-way cable. Cable modems that used the RF cable network for the return path were considered 'two-way cable' and were better able to compete with DSL, which is bidirectional. The standard is more or less defunct now, with new providers using, and existing providers having changed over to, the DOCSIS standard. The Motorola CyberSURFR is an example of a modem built to the proprietary CDLP standard, capable of a peak 10 Mbit/s downstream and 1.532 Mbit/s upstream. (CDLP supported a maximum downstream bandwidth of 30 Mbit/s, which could be reached by using several modems.)
The Australian ISP BigPond employed this system when it started cable modem trials in 1996. For a number of years cable Internet access was available only in Sydney, Melbourne and Brisbane via CDLP. This network ran in parallel with the newer DOCSIS system for a number of years. In 2004 the CDLP network was switched off, and the service is now exclusively DOCSIS.



Wireless broadband



Wireless broadband is a fairly new technology that provides high-speed wireless Internet access and data network access over a wide area.

The term broadband

According to the 802.16-2004 standard, broadband means 'having instantaneous bandwidth greater than around 1 MHz and supporting data rates greater than about 1.5 Mbit/s'. This means that wireless broadband features speeds roughly equivalent to wired broadband access, such as that of ADSL or a cable modem.
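That definition can be restated as a simple check; the example figures below are illustrative, not from the standard:

    # The thresholds below are the 802.16-2004 figures quoted above.
    def is_broadband(bandwidth_hz, data_rate_bit_s):
        return bandwidth_hz > 1e6 and data_rate_bit_s > 1.5e6

    print(is_broadband(2e6, 8e6))   # True  -- an ADSL-class link (example figures)
    print(is_broadband(4e3, 56e3))  # False -- a voiceband modem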

Technology and speeds

Few WISPs provide download speeds of over 100 Mbit/s; most broadband wireless access services are estimated to have a range of 50 km (30 miles) from a tower. Technologies used include LMDS and MMDS, as well as heavy use of the ISM bands; one particular access technology has been standardized as IEEE 802.16, also known as WiMAX. WiMAX is highly popular in Europe but has not met full acceptance in the United States, because the cost of deployment does not meet return-on-investment figures. In 2005 the Federal Communications Commission adopted a Report and Order that revised the FCC's rules to open the 3650 MHz band for terrestrial wireless broadband operations. On November 14, 2007 the Commission released Public Notice DA 07-4605, in which the Wireless Telecommunications Bureau announced the start date for the licensing and registration process for the 3650-3700 MHz band.
Initially, Wireless Internet Service Providers (WISPs) were only found in rural areas not covered by cable or DSL. These early WISPs would employ a high-capacity T-carrier, such as a T1 or DS3 connection, and then broadcast the signal from a high elevation, such as the top of a water tower. To receive this type of Internet connection, consumers mount a small dish on the roof of their home or office and point it at the transmitter. Line of sight is usually necessary for this type of technology, although some technologies from Motorola do not require it.

Mobile wireless broadband
Wireless broadband technologies also include new services from companies such as Verizon, Sprint, and AT&T Mobility, which allow a more mobile version of this broadband access. Consumers can purchase a PC card, laptop card, or USB device to connect their PC or laptop to the Internet via cell phone towers. This type of connection is stable in almost any area that can also receive a strong cell phone signal. These connections can cost more for the portable convenience, and have speed limitations in all but urban environments.


Digital subscriber line [DSL]



DSL or xDSL is a family of technologies that provide digital data transmission over the wires of a local telephone network. DSL originally stood for digital subscriber loop, although in recent years many have adopted digital subscriber line as a more marketing-friendly term for the most popular version of consumer-ready DSL, ADSL. DSL uses the high-frequency portion of the line; regular telephone service uses the low frequencies.
Typically, the download speed of consumer DSL services ranges from 512 kilobits per second (kbit/s) to 24,000 kbit/s, depending on DSL technology, line conditions and service level implemented. Typically, upload speed is lower than download speed for Asymmetric Digital Subscriber Line (ADSL) and equal to download speed for Symmetric Digital Subscriber Line (SDSL).


Digital subscriber line technology was originally implemented as part of the ISDN specification, which was later reused as IDSL. Higher-speed DSL connections like HDSL and SDSL were developed to extend the range of DS1 services on copper lines. Consumer-oriented ADSL is designed to operate on a BRI ISDN line, which is itself a form of DSL, as well as on an analog phone line.
DSL, like many other forms of communication, stems directly from Claude Shannon's seminal 1948 scientific paper: A Mathematical Theory of Communication. Employees at Bellcore (now Telcordia Technologies) developed ADSL in 1988 by placing wideband digital signals above the existing baseband analog voice signal carried between telephone company central offices and customers on conventional twisted pair cabling.[1]
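Shannon's result bounds what any such line can carry: C = B log2(1 + S/N). A quick Python sketch, with the bandwidth and signal-to-noise figures assumed purely for illustration:

    import math

    # C = B * log2(1 + S/N), from the 1948 paper cited above.
    def capacity_bit_s(bandwidth_hz, snr_linear):
        return bandwidth_hz * math.log2(1 + snr_linear)

    snr = 10 ** (30 / 10)  # an assumed 30 dB signal-to-noise ratio
    print(round(capacity_bit_s(1.0e6, snr) / 1e6, 1))  # ~10.0 Mbit/s over 1 MHz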
U.S. telephone companies promote DSL to compete with cable modems. DSL service was first provided over a dedicated "dry loop", but when the FCC required the incumbent local exchange carriers (ILECs) to lease their lines to competing providers such as Earthlink, shared-line DSL became common. Also known as DSL over Unbundled Network Element, this allows a single pair to carry data (via a digital subscriber line access multiplexer [DSLAM]) and analog voice (via a circuit-switched telephone switch) at the same time. Inline low-pass filter/splitters keep the high-frequency DSL signals out of the user's telephones. Although DSL avoids the voice frequency band, the nonlinear elements in the phone would otherwise generate audible intermodulation products and impair the operation of the data modem.
Older ADSL standards can deliver 8 Mbit/s to the customer over about 2 km (1.25 miles) of unshielded twisted pair copper wire. The latest standard, ADSL2+, can deliver up to 24 Mbit/s, depending on the distance from the DSLAM. Distances greater than 2 km (1.25 miles) significantly reduce the bandwidth usable on the wires, thus reducing the data rate. By using an ADSL loop extender, these distances can be increased substantially.


Most residential and small-office DSL implementations reserve low frequencies for POTS service, so that with suitable filters and/or splitters the existing voice service continues to operate independent of the DSL service. Thus POTS-based communications, including fax machines and analog modems, can share the wires with DSL. Only one DSL "modem" can use the subscriber line at a time. The standard way to let multiple computers share a DSL connection is to use a router that establishes a connection between the DSL modem and a local Ethernet, Powerline, or Wi-Fi network on the customer's premises.
Once upstream and downstream channels are established, they are used to connect the subscriber to a service such as an Internet service provider.
Dry-loop DSL or "naked DSL," which does not require the subscriber to have traditional land-line telephone service, started making a comeback in the US in 2004 when Qwest started offering it, closely followed by Speakeasy. As a result of AT&T's merger with SBC, and Verizon's merger with MCI, those telephone companies are required to offer naked DSL to consumers.
Even without the regulatory mandate, however, many ILECs offer naked DSL to consumers. The number of telephone landlines in the US dropped from 188 million in 2000 to 172 million in 2005, while the number of cellular subscribers grew to 195 million. This declining demand for landline service has resulted in the expansion of naked DSL availability.



Typical setup and connection procedures

The first step is the physical connection. On the customer side, the DSL Transceiver, or ATU-R, more commonly known as a DSL modem, is hooked up to a phone line. Strictly speaking, a modem modulates and demodulates a signal, whereas the DSL Transceiver is a signal transmit-and-receive unit. The telephone company (telco) connects the other end of the line to a DSLAM, which concentrates a large number of individual DSL connections into a single box. The location of the DSLAM depends on the telco, but it cannot be located too far from the user because of attenuation, the loss of signal strength due to the electrical resistance encountered as the signal moves between the DSLAM and the user's DSL modem. It is common for a few residential blocks to be connected to one DSLAM. When the DSL modem is powered up, it goes through a sync procedure. The actual process varies from modem to modem, but can be generally described as follows (a sketch of the sequence appears after these steps):
The DSL Transceiver does a self-test.
The DSL Transceiver checks the connection between the DSL Transceiver and the computer. For residential variations of DSL, this is usually the Ethernet port or a USB port; in rare models, a FireWire port is used. Older DSL modems sported a native ATM interface (usually a 25 Mbit/s serial interface). Also, some variations of DSL (such as SDSL) use synchronous serial connections.
The DSL Transceiver then attempts to synchronize with the DSLAM. Data can only come into the computer when the DSLAM and the modem are synchronized. The synchronization process is relatively quick (in the range of seconds) but is very complex, involving extensive tests that allow both sides of the connection to optimize performance according to the characteristics of the line in use. External, stand-alone modem units have an indicator labeled "CD", "DSL", or "LINK", which can be used to tell whether the modem is synchronized. During synchronization the light flashes; when synchronized, the light stays lit, usually green.
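A minimal Python sketch of this synchronization sequence, with step names and return values invented purely for illustration (real firmware is far more involved):

    # Step names, ordering and outcomes follow the description above;
    # the function bodies are placeholders, not real firmware behavior.
    def self_test():
        return True

    def check_lan_link():
        return True

    def train_with_dslam():
        return True  # stands in for the real line-probing negotiation

    def bring_up_dsl():
        steps = [("self-test", self_test),
                 ("LAN link check", check_lan_link),
                 ("DSLAM synchronization", train_with_dslam)]
        for name, step in steps:
            if not step():
                print(name, "failed; modem stays offline")
                return False
            print(name, "passed")
        return True  # the "DSL" light goes solid once this returns True

    bring_up_dsl()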
Modern DSL gateways have more functionality and usually go through an initialization procedure that is very similar to a PC starting up. The system image is loaded from the flash memory; the system boots, synchronizes the DSL connection and establishes the IP connection between the local network and the service provider, using protocols such as DHCP or PPPoE. The system image can usually be updated to correct bugs, or to add new functionality.



DSL technologies

The line-length limitations from telephone exchange to subscriber are more restrictive for higher data transmission rates. Technologies such as VDSL provide very high speed, short-range links as a method of delivering "triple play" services (typically implemented in fiber-to-the-curb network architectures). Technologies like GDSL can further increase the data rate of DSL.
Example DSL technologies (sometimes called xDSL) include:
ISDN Digital Subscriber Line (IDSL), uses ISDN-based technology to provide a data rate slightly higher than dual-channel ISDN.
High Data Rate Digital Subscriber Line (HDSL / HDSL2), was the first DSL technology to use a higher frequency spectrum over copper twisted-pair cables.
Symmetric Digital Subscriber Line (SDSL / SHDSL), the volume of data flow is equal in both directions.
Symmetric High-speed Digital Subscriber Line (G.SHDSL), a standardised replacement for early proprietary SDSL.
Asymmetric Digital Subscriber Line (ADSL), the volume of data flow is greater in one direction than the other.
Rate-Adaptive Digital Subscriber Line (RADSL)
Very High Speed Digital Subscriber Line (VDSL)
Very High Speed Digital Subscriber Line 2 (VDSL2), an improved version of VDSL
Etherloop (Ethernet Local Loop)
Uni Digital Subscriber Line (UDSL), technology developed by Texas Instruments, backwards compatible with all DMT standards
Gigabit Digital Subscriber Line (GDSL), based on binder MIMO technologies.

Asymmetric Digital Subscriber Line

Asymmetric Digital Subscriber Line (ADSL) is a form of DSL, a data communications technology that enables faster data transmission over copper telephone lines than a conventional voiceband modem can provide. It does this by utilizing frequencies that are not used by a voice telephone call. A splitter - or microfilter - allows a single telephone connection to be used for both ADSL service and voice calls at the same time. Because phone lines vary in quality and were not originally engineered with DSL in mind, ADSL can generally only be used over short distances, typically less than 3 mi (5 km).
At the telephone exchange the line generally terminates at a DSLAM, where another frequency splitter separates the voice-band signal for the conventional phone network. Data carried by the ADSL line is typically routed over the telephone company's data network and eventually reaches a conventional Internet network. In the UK, under British Telecom, the data network in question is its ATM network, which in turn passes the traffic to its IP network, IP Colossus.

The distinguishing characteristic of ADSL over other forms of DSL is that the volume of data flow is greater in one direction than the other, i.e. it is asymmetric. Providers usually market ADSL as a service for consumers to connect to the Internet in a relatively passive mode: able to use the higher speed direction for the "download" from the Internet but not needing to run servers that would require high speed in the other direction.
There are both technical and marketing reasons why ADSL is in many places the most common type offered to home users. On the technical side, there is likely to be more crosstalk from other circuits at the DSLAM end (where the wires from many local loops are close to each other) than at the customer premises. Thus the upload signal is weakest at the noisiest part of the local loop, while the download signal is strongest at the noisiest part of the local loop. It therefore makes technical sense to have the DSLAM transmit at a higher bit rate than does the modem on the customer end. Since the typical home user does in fact prefer a higher download speed, the telephone companies chose to make a virtue out of necessity, hence ADSL. On the marketing side, limiting upload speeds limits the attractiveness of this service to business customers, often causing them to purchase higher-cost Digital Signal 1 services instead. In this fashion, ADSL segments the digital communications market between business and home users.

How ADSL works

On the wire

Currently, most ADSL communication is full duplex. Full-duplex ADSL communication is usually achieved on a wire pair by frequency-division duplexing (FDD), echo-cancelling duplexing (ECD), or time-division duplexing (TDD). FDD uses two separate frequency bands, referred to as the upstream and downstream bands. The upstream band is used for communication from the end user to the telephone central office. The downstream band is used for communicating from the central office to the end user. With standard ADSL (annex A), the band from 25.875 kHz to 138 kHz is used for upstream communication, while 138 kHz - 1104 kHz is used for downstream communication. Each of these is further divided into smaller frequency channels of 4.3125 kHz. During initial training, the ADSL modem tests which of the available channels have an acceptable signal-to-noise ratio. The distance from the telephone exchange, noise on the copper wire, or interference from AM radio stations may introduce errors on some frequencies. By keeping the channels small, a high error rate on one frequency need not render the line unusable: that channel simply will not be used, merely resulting in reduced throughput on an otherwise functional ADSL connection.
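The figures above fix the downstream channel count, and an assumed bit loading shows how skipped channels merely reduce throughput. In the sketch below, the band edges and channel width come from the text, while the symbol rate and per-channel bit loading are assumptions:

    # Band edges and channel width from the text: 138-1104 kHz downstream,
    # 4.3125 kHz channels. Symbol rate and bit loading are assumptions.
    CHANNEL_HZ = 4312.5
    DOWNSTREAM_HZ = 1_104_000 - 138_000

    channels = int(DOWNSTREAM_HZ / CHANNEL_HZ)
    print(channels)  # 224 downstream channels

    SYMBOLS_PER_S = 4000  # assumed DMT symbol rate
    good_channels = 200   # assume 24 noisy channels are skipped entirely
    bits_per_channel = 8  # assumed loading on the good channels

    throughput = good_channels * bits_per_channel * SYMBOLS_PER_S
    print(throughput / 1e6, "Mbit/s")  # 6.4 Mbit/s despite the skipped channels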
Vendors may support usage of higher frequencies as a proprietary extension to the standard. However, this requires matching vendor-supplied equipment on both ends of the line, and will likely result in crosstalk issues that affect other lines in the same bundle.
There is a direct relationship between the number of channels available and the throughput capacity of the ADSL connection. The exact data capacity per channel depends on the modulation method used.
A common error is to attribute the A in ADSL to the word asynchronous. ADSL technologies use a synchronous framed protocol for data transmission on the wire.

Symmetric Digital Subscriber Line

Symmetric Digital Subscriber Line (SDSL) is a Digital Subscriber Line (DSL) variant with E1-like data rates (72 to 2320 kbit/s). It runs over one pair of copper wires, with a maximum range of about 3 kilometers (1.86 miles). The main difference between ADSL and SDSL is that SDSL has the same upstream data rate as downstream (symmetrical), whereas ADSL always has smaller upstream bandwidth (asymmetrical). However, unlike ADSL, SDSL cannot coexist with a conventional voice service on the same pair, as it takes over the entire bandwidth. It typically falls between ADSL and T-1/E-1 in price, and it is mainly targeted at small and medium businesses that may host a server on site (e.g. a terminal server or virtual private network) and want to use DSL, but do not need the higher performance of a leased line.
SDSL was never properly standardized until Recommendation G.991.2 (ex G.shdsl) was approved by the ITU-T. SDSL is often confused with G.SHDSL; in Europe, G.SHDSL was standardized by ETSI under the name 'SDSL'. This ETSI variant is compatible with the ITU-T G.SHDSL regional variant standardized for Europe.
SDSL equipment usually only interoperates with devices from the same vendor, though devices from other vendors using the same DSL chipset may be compatible. Most new installations use G.SHDSL equipment instead of SDSL.