
HLR & VLR


A Home Location Register (HLR) is a database of user (subscriber) information, i.e., customer profiles, used in mobile (cellular) networks. It is a key component of mobile networks such as GSM, TDMA, and CDMA networks. A HLR contains user information such as account information, account status, user preferences, features subscribed to by the user, user’s current location, etc. The data stored in HLRs for the different types of networks is similar but does differ in some details.
 
HLRs are used by the Mobile Switching Centers (MSCs) to originate and deliver arriving mobile calls.
 
A Visiting Location Register (VLR) is a database, similar to a HLR, which is used by the mobile network to temporarily hold the profiles of roaming users. This VLR data is based on the user information retrieved from a HLR. MSCs use a VLR to handle roaming users.
 
How HLR & VLR are used
Each mobile network has its own HLRs and VLRs. When a MSC detects a mobile user’s presence in the area covered by its network, it first checks a database to determine if the user is in his/her home area or is roaming, i.e., the user is a visitor.
·         User in Home Area: HLR has the necessary information for initiating, terminating, or receiving a call.
·         User is Roaming: VLR contacts the user’s HLR to get the necessary information to set up a temporary user profile.
 
The user’s location is recorded in the HLR, and in case the user is roaming, it is also recorded in the VLR.
 
In case the user wants to make a call:
·         User in Home Area: MSC contacts the HLR prior to setting up the call.
·         User is Roaming: MSC contacts the VLR prior to setting up the call.
 
In case there is a call for the user (call goes to the home MSC):
·         User in Home Area: Home MSC delivers the call immediately.
·         User is Roaming: Home MSC contacts the VLR to determine the appropriate switch in the roaming area to handle the arriving call and then transfers the call to the roaming area MSC.
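
To make this flow concrete, here is a minimal sketch, in Python, of how an MSC-side lookup might use the HLR and VLR. All class, field, and method names are hypothetical; real networks perform these exchanges with MAP/SS7 signalling rather than local method calls.

```python
# Hypothetical sketch of HLR/VLR interaction during call delivery.
# Names are illustrative only; real networks use MAP/SS7 signalling.

class HLR:
    """Home Location Register: permanent subscriber profiles."""
    def __init__(self):
        self.profiles = {}  # IMSI -> {"features": ..., "serving_msc": ...}

    def update_location(self, imsi, msc_id):
        self.profiles[imsi]["serving_msc"] = msc_id

class VLR:
    """Visitor Location Register: temporary profiles for roamers."""
    def __init__(self, hlr, msc_id):
        self.hlr, self.msc_id = hlr, msc_id
        self.cache = {}

    def attach(self, imsi):
        # Pull the profile from the user's HLR and record the new location.
        self.cache[imsi] = dict(self.hlr.profiles[imsi])
        self.hlr.update_location(imsi, self.msc_id)

def deliver_call(home_msc_id, hlr, imsi):
    serving = hlr.profiles[imsi]["serving_msc"]
    if serving == home_msc_id:
        return "deliver locally"                      # user in home area
    return f"forward call to roaming MSC {serving}"   # user roaming
```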
 
Issues with HLRs:
·         Slow performance due to database lookups.
·         Questionable adaptability in handling different types of networks, including 3G networks.
·         Limited capability/data to support user authentication.
·         Limited support for data backups, fault tolerance, and reliability.
·         Limited scalability.

Firewalls


A firewall is a software or hardware-based network security system that controls the incoming and outgoing network traffic by analyzing the data packets and determining whether they should be allowed through or not, based on a rule set. Generally, firewalls are configured to protect against unauthenticated interactive logins from the outside world. This helps prevent hackers from logging into machines on a network.

Firewalls also provide logging and auditing functions; often they provide summaries to the administrator about what type/volume of traffic has been processed through.

Network Layer Firewalls

Network layer firewalls, also called packet filters, operate at a relatively low level of the TCP/IP protocol stack, not allowing packets to pass through unless they match the established rule set. Network layer firewalls generally make their decisions based on the source address, destination address and ports in individual IP packets.

“Stateful” network layer firewalls maintain context about active sessions, and use that "state information" to speed packet processing. If a packet does not match an existing connection, it will be evaluated according to the ruleset for new connections. If a packet matches an existing connection based on comparison with the firewall's state table, it will be allowed to pass without further processing.

“Stateless” network layer firewalls require less memory, and can be faster for simple filters that require less time to filter than to look up a session. However, they cannot make more complex decisions based on what stage communications between hosts have reached.
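
The stateful decision path described above can be sketched in a few lines of Python. The rule format, field names, and default-deny policy are invented for illustration; real packet filters track both directions of a connection and far more protocol state.

```python
# Stateful packet filter sketch: a state-table hit bypasses the ruleset.

state_table = set()  # established connections: (src, sport, dst, dport)

rules = [
    # (destination port, action) -- a deliberately tiny ruleset
    (22, "drop"),    # block inbound SSH
    (443, "allow"),  # allow HTTPS
]

def filter_packet(src, sport, dst, dport):
    conn = (src, sport, dst, dport)
    if conn in state_table:           # existing session: fast path
        return "allow"
    for port, action in rules:        # new connection: evaluate rules
        if dport == port:
            if action == "allow":
                state_table.add(conn)
            return action
    return "drop"                     # default deny
```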

Application Layer Firewalls

Application-layer firewalls work on the application level of the TCP/IP stack (i.e., all browser traffic, or all telnet or ftp traffic), and may intercept all packets traveling to or from an application.

Application firewalls function by determining whether a process should accept any given connection. Application firewalls accomplish their function by hooking into socket calls to filter the connections between the application layer and the lower layers of the OSI model. Also, application firewalls further filter connections by examining the process ID of data packets against a ruleset for the local process involved in the data transmission.

Proxies

A proxy server (running either on dedicated hardware or as software on a general-purpose machine) may act as a firewall by responding to input packets (connection requests, for example) in the manner of an application, while blocking other packets. A proxy server is a gateway from one network to another for a specific network application, in the sense that it functions as a proxy on behalf of the network user.

Computers establish a connection to the proxy, which serves as an intermediary and initiates a new network connection on behalf of the request. This prevents direct connections between systems on either side of the firewall and makes it harder for an attacker to discover where the internal network is, because they never receive packets directly from their target system.
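
The forwarding behaviour can be sketched with Python's standard socket module. This is a toy single-connection relay that assumes a simple request/reply exchange, not a production proxy:

```python
import socket

def relay_once(listen_port, upstream_host, upstream_port):
    """Accept one client, open a *new* connection upstream, and shuttle
    bytes between the two. The client never talks to the server directly."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", listen_port))
    srv.listen(1)
    client, _ = srv.accept()
    upstream = socket.create_connection((upstream_host, upstream_port))
    data = client.recv(4096)
    while data:
        upstream.sendall(data)        # forward the request bytes
        reply = upstream.recv(4096)   # fetch the reply on the client's behalf
        client.sendall(reply)
        data = client.recv(4096)
    client.close()
    upstream.close()
```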

Telecommunications Management Network


The Telecommunications Management Network (TMN) is a protocol model defined by ITU-T for managing open systems in a communications network. It is part of the ITU-T recommendation series M.3000 and is based on the OSI management specifications in ITU-T recommendation series X.700.

TMN provides a framework for achieving interconnectivity and communication across heterogeneous operations systems and telecommunication networks. To achieve this, TMN defines a set of interface points for elements which perform the actual communications processing (such as a call processing switch). It also allows for management workstations to monitor and control them. The standard interface allows elements from different manufacturers to be incorporated into a network under a single management control.

For communication between Operations Systems and Network Elements (NEs), it uses the Common Management Information Protocol (CMIP) over the Q3 interface, with mediation devices where elements do not support the Q3 interface directly.

The TMN layered organization is used as the basis for the management software of ISDN, B-ISDN, ATM, SDH/SONET and GSM networks. It is not as commonly used for purely packet-switched data networks such as GPRS.

Modern telecom networks offer automated management functions and are run by OSS software. These manage modern telecom networks and provide the data that is needed in the day-to-day running of a telecom network. OSS software is also responsible for issuing commands to the network infrastructure to activate new service offerings, commence services for new customers, and detect and correct network faults.

The framework identifies four logical layers of network management:
 
· Business management – Includes the functions related to business aspects, for example analyzing trends and quality issues, or providing a basis for billing and other financial reports.

· Service management – Handles services in the network: definition, administration and charging of services.

· Network management – Distributes network resources and performs the tasks of configuration, control and supervision of the network.

· Element management – Handles individual network elements including alarm management, handling of information, backup, logging, and maintenance of hardware and software.
 
A network element provides agent services, mapping the physical aspects of the equipment into the TMN framework.

Network forensics


Network forensics is a sub-branch of digital forensics relating to the monitoring and analysis of computer network traffic for the purposes of information gathering, legal evidence, or intrusion detection. Unlike other areas of digital forensics, network investigations deal with volatile and dynamic information, often making network forensics a proactive investigation.

Network forensics generally has two uses:

The first, relating to security, involves monitoring a network for anomalous traffic and identifying intrusions. An attacker might be able to erase all log files on a compromised host; network-based evidence might therefore be the only evidence available for forensic analysis. 

The second form of network forensics relates to law enforcement. In this case, analysis of captured network traffic can include tasks such as reassembling transferred files, searching for keywords, and parsing human communication such as emails or chat sessions.

Two systems are commonly used to collect network data:

"Catch-it-as-you-can" - This is where all packets passing through certain traffic point are captured and written to large storage with analysis being done subsequently in batch mode. 

"Stop, look and listen" - This is where each packet is analyzed by a faster processor in a rudimentary way in memory and only certain information saved for future analysis.

Types
Ethernet – Applying forensic methods on the Ethernet layer is done by eavesdropping bit streams with tools called monitoring tools or sniffers. The most common tool on this layer is Wireshark (formerly known as Ethereal). It collects all data on this layer and allows the user to filter for different events. With these tools, websites, email attachments and more that have been transmitted over the network can be reconstructed. An advantage of collecting this data is that it is directly connected to a host. If, for example, the IP address or the MAC address of a host at a certain time is known, all data for or from this IP or MAC address can be filtered.

TCP/IP – For the correct routing of packets through the network (e.g., the Internet), every intermediate router must have a routing table, which is one of the best sources of information when investigating a digital crime. To do this, it is necessary to reverse the sending route of the attacker, follow the packets, and find the computer the packet came from (i.e., the attacker's source).

Another source of evidence on this layer is authentication logs. They show which account and which user was associated with an activity, and may reveal who the attacker was, or at least narrow down the set of people who could be the attacker.

The Internet – The internet can be a rich source of digital evidence including web browsing, email, newsgroup, synchronous chat and peer-to-peer traffic.

Wireless forensics is a sub-discipline of network forensics. The main goal of wireless forensics is to provide the methodology and tools required to collect and analyze (wireless) network traffic that can be presented as valid digital evidence in a court of law. The evidence collected can correspond to plain data or, with the broad usage of Voice-over-IP (VoIP) technologies, especially over wireless, can include voice conversations.
 

Second Screen or Multi Screen


Second screen, sometimes also referred to as "companion device" (or "companion apps" when referring to software applications), is a term that refers to an additional electronic device (e.g. tablet, smartphone) that allows a television audience to interact with the content they are consuming, such as TV shows, movies, music, or video games. Extra data is displayed on a portable device synchronized with the content being viewed on television.

Several studies show a clear tendency of the consumer to use a device while watching television. They show high use of tablet or smartphone when watching television, and indicate a high percentage of comments or posts on social networks being about the content that's being watched.

Based on these studies, many companies both in content production and advertising have adapted their delivery content to the lifestyle of the consumer in order to get maximum attention and thus profits. Applications are becoming a natural extension of television programming, both live and on demand.

Applications

Many applications in the "second screen" are designed to give the consumer another way of interactivity. They also give the media companies another way to sell advertising content. Some examples:

·         Broadcast of the Masters Golf Tournament, with a companion iPhone application (rating information and publicity)
·         TV programs broadcasting live tweets and comments.
·         Synchronization of audiovisual content via web advertising.
·         Applications that extend the content information.
·         Shows that add content devoted exclusively to the second screen on their websites.
·         Applications that synchronize the content being viewed to the portable device.
·         Video game console playing with extra data, such as a map or strategy data, synchronized with the content being viewed on the portable device.
·         TV discovery application with recommendation, EPG (live content), personalization.

Sports Broadcasting

Sports broadcasters, to stem the flight of the TV audience away from the main screen (a new name for the television) to the second screen, are offering alternative and enhanced content alongside the main program. The idea is to present content related to the main program, such as unseen moments, alternative information, soundtrack, and characters. New technologies allow the viewer to see different camera angles while watching the game.
            

iBurst


iBurst (or HC-SDMA, High Capacity Spatial Division Multiple Access) is a wireless broadband technology which optimizes the use of its bandwidth with the help of smart antennas.

Description

HC-SDMA was announced as being considered by ISO TC204 WG16 for the continuous communications standards architecture, known as Communications, Air-interface, Long and Medium range (CALM), which ISO is developing for intelligent transport systems (ITS). ITS may include applications for public safety, network congestion management during traffic incidents, automatic toll booths, and more.

The HC-SDMA interface provides wide-area broadband wireless data-connectivity for fixed, portable and mobile computing devices and appliances. The protocol is designed to be implemented with smart antenna array techniques (called MIMO for multiple-input multiple-output) to substantially improve the radio frequency (RF) coverage, capacity and performance for the system.

Technology

The HC-SDMA interface operates on a similar premise as cellular phones, with hand-offs between HC-SDMA cells providing the user with seamless wireless Internet access even when moving at the speed of a car or train.

The protocol:

·         specifies base station and client device RF characteristics, including output power levels, transmit frequencies and timing error, pulse shaping, in-band and out-of band spurious emissions, receiver sensitivity and selectivity;

·         defines associated frame structures for the various burst types including standard uplink and downlink traffic, paging and broadcast burst types;

·         specifies the modulation, forward error correction, interleaving and scrambling for various burst types;

·         describes the various logical channels (broadcast, paging, random access, configuration and traffic channels) and their roles in establishing communication over the radio link; and

·         specifies procedures for error recovery and retry.

The protocol also supports Layer 3 (L3) mechanisms for creating and controlling logical connections (sessions) between client device and base including registration, stream start, power control, handover, link adaptation, and stream closure, as well as L3 mechanisms for client device authentication and secure transmission on the data links.

Usage

Various options are already commercially available using:

·         Desktop modem with USB and Ethernet ports (with external power supply)
·         Portable USB modem (using USB power supply)
·         Laptop modem (PC card)
·         Wireless Residential Gateway
·         Mobile Broadband Router

Assisted GPS


Assisted GPS, generally abbreviated as A-GPS or aGPS, is a system that can under certain conditions improve the startup performance, or time-to-first-fix (TTFF), of a GPS satellite-based positioning system. It is used extensively with GPS-capable cellular phones to make the location of a cell phone available to emergency call dispatchers.

"Standalone" or "autonomous" GPS operation uses radio signals from satellites alone. In very poor signal conditions, for example in a city, these signals may suffer multipath propagation where signals bounce off buildings, or are weakened by passing through atmospheric conditions, walls, or tree cover. When first turned on in these conditions, some standalone GPS navigation devices may not be able to fix a position due to the fragmentary signal, rendering them unable to function until a clearer signal can be received continuously for a long enough period of time.

An assisted GPS system can address these problems by using data available from a network to locate and use the satellites in poor signal conditions. For billing purposes, network providers often count this as a data access, which can cost money depending on the plan.

Basic Concepts
Standalone GPS provides a first position fix in approximately 30–40 seconds. A standalone GPS system needs orbital information about the satellites to calculate the current position. The data rate of the satellite signal is only 50 bits/s, so downloading orbital information like the ephemeris and almanac directly from the satellites typically takes a long time, and if the satellite signals are lost during the acquisition of this information, it is discarded and the standalone system has to start from scratch.

In A-GPS, the network operator deploys an A-GPS server. These servers download the orbital information from the satellites and store it in a database. An A-GPS-capable device can connect to these servers and download this information using mobile network radio bearers such as GSM, CDMA, WCDMA or LTE, or even other wireless radio bearers such as Wi-Fi. Usually the data rate of these bearers is high, so downloading orbital information takes less time.
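
The 50 bit/s figure is what makes the standalone cold start slow: a GPS navigation frame is 1,500 bits, and the almanac is spread across 25 such frames, so the minimum download times work out as follows:

```python
BIT_RATE = 50          # bits/s, GPS C/A navigation message data rate
FRAME_BITS = 1_500     # one navigation frame (five 300-bit subframes)
ALMANAC_FRAMES = 25    # the almanac is spread over 25 consecutive frames

frame_time = FRAME_BITS / BIT_RATE            # 30.0 seconds per frame
almanac_time = ALMANAC_FRAMES * frame_time    # 750.0 s, i.e. 12.5 minutes
print(frame_time, almanac_time)
```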

A-GPS has two modes of operation:

Mobile Station Assisted (MSA)

In MSA mode A-GPS operation, the A-GPS capable device receives acquisition assistance, reference time and other optional assistance data from the A-GPS server. With the help of the above data, the A-GPS device receives signals from the visible satellites and sends the measurements to the A-GPS server. The A-GPS server calculates the position and sends it back to the A-GPS device.

Mobile Station Based (MSB)

In MSB mode A-GPS operation, the A-GPS device receives ephemeris, reference location, reference time and other optional assistance data from the A-GPS server. With the help of the above data, the A-GPS device receives signals from the visible satellites and calculates the position.
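
The practical difference between the two modes is only where the position solution is computed. A schematic sketch follows (Python; every function name below is a hypothetical placeholder for the real protocol exchange):

```python
# Where the position fix is computed in the two A-GPS modes.
# `device`, `server`, and their methods are placeholders standing in
# for the real assistance-data and measurement protocol exchanges.

def msa_fix(device, server):
    """Mobile Station Assisted: the network computes the position."""
    assist = server.send_acquisition_assistance()  # reference time, etc.
    measurements = device.measure_satellites(assist)
    return server.compute_position(measurements)   # server-side fix

def msb_fix(device, server):
    """Mobile Station Based: the handset computes the position."""
    assist = server.send_ephemeris_reference_location_and_time()
    measurements = device.measure_satellites(assist)
    return device.compute_position(measurements)   # device-side fix
```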

Many mobile phones combine A-GPS with other location services, including the Wi-Fi Positioning System and cell-site multilateration, sometimes as part of a hybrid positioning system.

Wireless Ad hoc


A wireless ad hoc network is a decentralized type of wireless network. The network is ad hoc because it does not rely on a pre-existing infrastructure such as routers or access points; instead, each node participates in routing by forwarding data to other nodes, and the determination of which nodes forward data is made dynamically based on the network connectivity. In addition to classic routing, ad hoc networks can use flooding to forward data.

An ad hoc network typically refers to any set of networks where all devices have equal status on a network and are free to associate with any other ad hoc network devices in link range. Very often, ad hoc network refers to a mode of operation of IEEE 802.11 wireless networks.

Application
The decentralized nature of wireless ad hoc networks makes them suitable for a variety of applications where central nodes can't be relied on, and may improve the scalability of networks compared to wireless managed networks.

Minimal configuration and quick deployment make ad hoc networks suitable for emergency situations like natural disasters or military conflicts. The presence of dynamic and adaptive routing protocols enables ad hoc networks to be formed quickly.

Wireless ad hoc networks can be further classified by their application:
·         mobile ad hoc networks (MANET)
·         wireless mesh networks (WMN)
·         wireless sensor networks (WSN)

Technical requirements

An ad hoc network is made up of multiple “nodes” connected by “links”.

Links are influenced by the node's resources (e.g. transmitter power, computing power and memory) and by behavioral properties (e.g. reliability), as well as by link properties (e.g. length-of-link and signal loss, interference and noise). Since links can be connected or disconnected at any time, a functioning network must be able to cope with this dynamic restructuring, preferably in a way that is timely, efficient, reliable, robust and scalable.

The network must allow any two nodes to communicate, by relaying the information via other nodes. A “path” is a series of links that connects two nodes. Various routing methods use one or two paths between any two nodes; flooding methods use all or most of the available paths.
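
Flooding, mentioned above as the alternative to classic routing, is straightforward to sketch: each node forwards a new message once to all neighbours in link range. A minimal Python illustration over a toy adjacency list:

```python
def flood(adjacency, source):
    """Deliver a message from `source` to every reachable node by having
    each node forward it once to all of its link-range neighbours."""
    seen = {source}
    queue = [source]
    while queue:
        node = queue.pop(0)
        for neighbour in adjacency[node]:
            if neighbour not in seen:   # drop duplicates: forward only once
                seen.add(neighbour)
                queue.append(neighbour)
    return seen  # every node that received the message

# Toy 4-node ad hoc topology: A-B, B-C, C-D
adjacency = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(flood(adjacency, "A"))   # {'A', 'B', 'C', 'D'}
```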

Remote Radio Head


A remote radio head is an operator radio control panel that connects to a remote radio transceiver via electrical or wireless interface. When used to describe aircraft radio cockpit radio systems, this control panel is often called the radio head.

Current and future generations of wireless cellular systems feature heavy use of Remote Radio Heads (RRHs) in the base stations. Instead of hosting a bulky base station controller close to the top of antenna towers, new wireless networks connect the base station controller and remote radio heads through low-loss optical fibers. The interface protocol that enables such a distributed architecture is called the Common Public Radio Interface (CPRI). With this new architecture, RRHs offload intermediate frequency (IF) and radio frequency (RF) processing from the base station. Furthermore, the base station and RF antennas can be physically separated by a considerable distance, providing much needed system deployment flexibility.

Typical advanced processing algorithms on RRHs include digital up-conversion and digital down-conversion (DUC and DDC), crest factor reduction (CFR), and digital pre-distortion (DPD). DUC interpolates base band data to a much higher sample rate via a cascade of interpolation filters. It further mixes the complex data channels with IF carrier signals so that RF modulation can be simplified. CFR reduces the peak-to-average power ratio of the data so it does not enter the non-linear region of the RF power amplifier. DPD estimates the distortion caused by the non-linear effect of the power amplifier and pre-compensates the data.
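
As an illustration of CFR, the simplest conceivable technique is hard clipping: any sample whose magnitude exceeds a threshold is scaled back so the signal stays out of the amplifier's non-linear region. A NumPy sketch follows; real RRHs use more sophisticated peak-cancellation methods to control spectral regrowth.

```python
import numpy as np

def hard_clip_cfr(iq, threshold):
    """Crest factor reduction by magnitude clipping of complex IQ samples.
    Only the simplest possible illustration of the CFR idea."""
    mag = np.abs(iq)
    scale = np.minimum(1.0, threshold / np.maximum(mag, 1e-12))
    return iq * scale

# Random complex baseband samples, then clip peaks above |x| = 2.
iq = (np.random.randn(1024) + 1j * np.random.randn(1024)) / np.sqrt(2)
clipped = hard_clip_cfr(iq, threshold=2.0)

def papr_db(x):  # peak-to-average power ratio, in dB
    return 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))

print(papr_db(iq), papr_db(clipped))  # PAPR drops after clipping
```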

More importantly, many wireless standards demand re-configurability in both the base station and the RRH. For example, the 3GPP Long Term Evolution (LTE) and WiMax systems both feature scalable bandwidth. The RRH should be able to adjust – at run time – the bandwidth selection, the number of channels, the incoming data rate, among many other things.

RRH system model

Typically, a base station connects to a RRH via optical cables. In the downlink direction, base band data is transported to the RRH via CPRI links. The data is then up-converted to IF sample rates, preprocessed by CFR or DPD to mitigate non-linear effects of broadband power amplifiers, and eventually sent for radio transmission.


Wi-Fi Hotspot


Wireless internet has made technological life easy and convenient. There are different methods and technologies for using wireless internet everywhere and continuing our regular internet-based work. One of these technologies is the Wi-Fi hotspot.

A hotspot is a site that offers Internet access over a wireless local area network through the use of a router connected to a link to an Internet service provider. Hotspots typically use Wi-Fi technology. With the help of our mobile devices we can access the wireless network through hotspots in coffee shops, malls, etc.

To set up a hotspot, all we need is

1. a hotspot kit (hardware, software and remote monitoring device)

2. a high speed internet connection (DSL, T1 or DS3)
 
The type of hotspot kit depends on whether we need a single access point or multiple access points. If we are going for multiple access points, we have to consider the area where we want to deploy the network – whether it is a multi-storey building or a medium-sized hotel.

Setting up a Hotspot

If we already have a network held together by Ethernet and now want to upgrade to a wireless hotspot, we need to purchase a Wireless Access Point and join it with the Ethernet network.

If we are starting from scratch, what we need is a Wireless Access Point Router. This kit contains:

· a port to connect the modem

· a router

· an Ethernet hub

· a firewall

· a wireless access point

We can then connect the computers with Ethernet cables or with wireless cards. Whichever we choose, once we plug in and power on the Wireless Access Point, the Wi-Fi hotspot will be functional.

We will need the 802.11a standard if we are setting up the network for business purposes. For home use, we can either choose 802.11b, which is the least expensive but also the slowest, or 802.11g, which costs a little more but is much faster.

 
To Bill or not to Bill

This depends on the nature and size of your business. Many businesses want to set up hotspots as another value added service to attract more customers.

If you decide to charge your customers, make sure that you are choosing a Wi-Fi provider who has a built in package that helps in billing. The hotspot kit you purchase should enable you to take credit cards to your gateway. In this model, you are likely to share the revenues with the service provider, and the service provider assists in the day-to-day operations and maintenance of the hotspot.

The so-called "User-Fairness-Model" is a dynamic billing model, which allows a volume-based billing, charged only by the amount of payload (data, video, audio). Moreover, the tariff is classified by net traffic and user needs. If the net traffic increases, then the user has to pay the next higher tariff class. By the way the user is asked for if he still wishes the session also by a higher traffic class. Moreover, in time-critical applications (video, audio) a higher class fare is charged, than for non time-critical applications (such as reading Web pages, e-mail).

Hotspot 2.0

Also known as HS2 and Wi-Fi Certified Passpoint, Hotspot 2.0 is a new approach to public access Wi-Fi by the Wi-Fi Alliance. The idea is for mobile devices to automatically join a Wi-Fi subscriber service whenever the user enters a Hotspot 2.0 area. The intention is to provide better bandwidth and services-on-demand to end-users, whilst also relieving mobile carrier infrastructure of traffic overheads.

Hotspot 2.0 is based on the IEEE 802.11u standard, which is a new set of protocols to enable cellular-like roaming. If the device supports 802.11u and is subscribed to a Hotspot 2.0 service it will automatically connect and roam.

Policy and Charging Rules Function


With surging demand for broadband connectivity and bandwidth, operators are compelled to maintain a delicate equilibrium between competitively priced offers and managing network congestion and costs of the data traffic. Policy Management enables operators to address network congestion by enforcing subscriber and application usage policies. But more crucially, these policy controls also provide the means to innovate, personalize the customer experience, and monetize data usage. The Policy and Charging Rules Function (PCRF) plays an important role in enforcing Policy Management.

PCRF is the software node designated in real-time to determine policy rules in a multimedia network. As a policy tool, the PCRF plays a central role in next-generation networks. Unlike earlier policy engines that were added on to an existing network to enforce policy, the PCRF is a software component that operates at the network core and accesses subscriber databases and other specialized functions, such as a charging system, in a centralized manner. Because it operates in real time, the PCRF has an increased strategic significance and broader potential role than traditional policy engines. It is an important entity in the LTE core network domain.

The PCRF is the part of the network architecture that aggregates information to and from the network, operational support systems, and other sources (such as portals) in real time, supporting the creation of rules and then automatically making policy decisions for each subscriber active on the network. Such a network might offer multiple services, quality of service (QoS) levels, and charging rules. PCRF can provide a network-agnostic solution (wireline and wireless) and a multi-dimensional approach, which helps in creating a lucrative and innovative platform for operators. PCRF can be integrated with different platforms like billing, rating, charging, and subscriber databases, or it can be deployed as a standalone operational entity.
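
Conceptually, the PCRF's job reduces to evaluating policy rules against real-time subscriber and network state. A schematic sketch in Python follows; the attribute names and rules are invented, and a real PCRF communicates its decisions to enforcement points over Diameter interfaces such as Gx:

```python
# Schematic PCRF decision: subscriber state + rules -> QoS/charging action.
# Attribute names and rules are invented for illustration.

def pcrf_decide(subscriber, network):
    if subscriber["monthly_usage_gb"] > subscriber["quota_gb"]:
        return {"qos": "throttled_512kbps", "charging": "overage_rate"}
    if network["congested"] and subscriber["tier"] != "premium":
        return {"qos": "best_effort", "charging": "standard"}
    return {"qos": subscriber["tier"] + "_qos", "charging": "standard"}

decision = pcrf_decide(
    {"monthly_usage_gb": 12, "quota_gb": 10, "tier": "basic"},
    {"congested": False},
)
print(decision)  # {'qos': 'throttled_512kbps', 'charging': 'overage_rate'}
```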

OAuth


OAuth 2.0 is an open authorization protocol which enables applications to access each other’s data; for example, it enables a user to log in to a single application (e.g. Google, Facebook, Foursquare, Twitter, etc.) and share the data in that application with other applications.
 
OAuth 2.0 is the next evolution of the OAuth protocol and is not backward compatible with OAuth 1.0.

Principle:

[Diagram: example of how OAuth 2.0 is used to share data between applications]

When a user accesses the game web application, he/she is asked to login to the game via Facebook. The user logs into Facebook and is sent back to the game. The game can now access the user’s data in Facebook, and call functions in Facebook on behalf of the user (e.g. posting status updates etc.).

OAuth 2.0 can be used either to create an application that can read user data from another application (e.g. the game in the diagram above) or an application that enables other applications to access its user data (e.g. Facebook in the example above).

 
OAuth 2.0 Roles

Resource Owner

The resource owner is the person or application that owns the data that is to be shared. With reference to the above example, the Facebook user is the resource owner.

Resource Server

The resource server is the server hosting the resource. The Facebook server is the resource server in the above example.

Client Application
The client application is the application requesting access to the resources stored on the resource server. Here, the game application requesting access to the user’s Facebook account is the client application.

Authorization Server

The authorization server is the server authorizing the client app to access the resources of the resource owner. The authorization server and the resource server may or may not be the same server.
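
Putting the four roles together, a minimal authorization-code flow might look like the following Python sketch using the requests library. All URLs, identifiers, and scopes are placeholders; the real endpoints are published by the authorization server.

```python
import requests

# Placeholders: substitute the real authorization server's endpoints.
AUTHORIZE_URL = "https://auth.example.com/oauth/authorize"
TOKEN_URL = "https://auth.example.com/oauth/token"
CLIENT_ID = "my-game-app"
CLIENT_SECRET = "secret"
REDIRECT_URI = "https://game.example.com/callback"

# Step 1: the client application sends the resource owner's browser to
# the authorization server to log in and approve access.
login_url = (f"{AUTHORIZE_URL}?response_type=code&client_id={CLIENT_ID}"
             f"&redirect_uri={REDIRECT_URI}&scope=profile")

# Step 2: after approval, the browser returns to REDIRECT_URI with a
# one-time code, which the client exchanges for an access token.
def exchange_code(code):
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    return resp.json()["access_token"]

# Step 3: the token authorizes calls to the resource server's API.
def get_profile(token):
    return requests.get("https://api.example.com/me",
                        headers={"Authorization": f"Bearer {token}"}).json()
```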

MultiSeat Desktop Virtualization


MultiSeat Desktop Virtualization is a method by which a common desktop PC, with extra keyboards, mice, and video screens directly attached to it, can be used to install, load, and concurrently run multiple operating systems. These operating systems can be the same across all "seats" or they can be different. It is similar to server based computing only in the fact that one mainframe is supporting multiple users. On the other hand, it is different because the "terminals" are composed of nothing more than the regular keyboard, monitor and mouse, and these devices are plugged directly into the PC. USB hubs can be used for cable management of the keyboards and mice, and extra video cards (typically dual or quad output) may need to be installed to handle the multiple monitors.

It is commonly known that modern-day PCs are extremely powerful and have substantial excess CPU processing power. Server based computing has been around for a long time specifically to take advantage of this excess CPU power and allow multiple users to share it. However, the typical problem with this type of system is that it is dependent upon one operating system and one set of applications, and there are many software titles that are not allowed to be shared among multiple users.

Virtualization is a type of server based computing. It is a method by which a "guest" operating system runs on top of, while being separated from, the hardware, and it can solve some of these problems. This means that multiple "guest" operating systems can be run, solving the problem of single-user applications not being able to be launched for multiple, concurrent users.

Multiseat desktop virtualization is an entirely new methodology which combines the cost saving benefits and ease of maintenance of server based computing, the time savings of hardware agnostic cloning, and the capabilities of desktop virtualization, with the performance capabilities of real PC functionality. It takes advantage of multiple cores in present day CPUs to enable ordinary users to install a multiseat PC giving 2 "seats" with a dual-core CPU or 4 "seats" with a quad-core CPU. The operating system of this PC is initially installed just like a regular PC. Regular PC users can install and use this type of product without having to install servers, or know how to manage complicated, server based computing or server based virtualization products.


| Type | Standard server/TCP-IP based computing | Virtualized server/TCP-IP based computing | MultiSeat Desktop Virtualization |
|---|---|---|---|
| Can run all single user applications | No | Yes | Yes |
| Can run multimedia without buffering | No | No | Yes |
| Easy to install | No | No | Yes |
| Each "seat" has its own IP and MAC address | No | Yes | Yes |
| Each "seat" cloned image is hardware agnostic across different sets of hardware | No | Yes | Yes |