
DHCP

The Dynamic Host Configuration Protocol (DHCP) is used by computers to request Internet Protocol parameters, such as an IP address, from a network server. The protocol operates on the client-server model. DHCP is very common in modern networks ranging in size from home networks to large campus networks and regional Internet service provider networks. Most residential network routers receive a globally unique IP address within the provider network. Within a local network, DHCP assigns a local IP address to each device connected to the network.

When a computer or other networked device connects to a network, its DHCP client software in the operating system sends a broadcast query requesting necessary information. Any DHCP server on the network may service the request. The DHCP server manages a pool of IP addresses and information about client configuration parameters such as default gateway, domain name, the name servers, and time servers. On receiving a request, the server may respond with specific information for each client, as previously configured by an administrator, or with a specific address and any other information valid for the entire network, and the time period for which the allocation (lease) is valid. A host typically queries for this information immediately after booting, and periodically thereafter before the expiration of the information. When an assignment is refreshed by the client computer, it initially requests the same parameter values, but may be assigned a new address from the server, based on the assignment policies set by administrators.
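
To make the request-and-offer exchange concrete, here is a minimal sketch of a DHCPDISCOVER broadcast. It assumes the third-party scapy library and an interface named "eth0"; neither is prescribed by the text above, and the exchange normally requires root privileges.

# Minimal DHCPDISCOVER sketch using scapy (an assumption; run as root).
from scapy.all import Ether, IP, UDP, BOOTP, DHCP, srp, get_if_hwaddr, mac2str

iface = "eth0"                           # assumed interface name
mac = get_if_hwaddr(iface)

discover = (
    Ether(src=mac, dst="ff:ff:ff:ff:ff:ff") /
    IP(src="0.0.0.0", dst="255.255.255.255") /
    UDP(sport=68, dport=67) /
    BOOTP(chaddr=mac2str(mac), xid=0x1234) /
    DHCP(options=[("message-type", "discover"), "end"])
)

# Broadcast the discover and wait for a DHCPOFFER from any server on the link.
answered, _ = srp(discover, iface=iface, timeout=5, verbose=False)
for _, offer in answered:
    print("Offered address:", offer[BOOTP].yiaddr)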

On large networks that consist of multiple links, a single DHCP server may service the entire network when aided by DHCP relay agents located on the interconnecting routers. Such agents relay messages between DHCP clients and DHCP servers located on different subnets.

Depending on implementation, the DHCP server may have three methods of allocating IP addresses (a minimal sketch of the three strategies follows the list):
·    Dynamic allocation:  A network administrator reserves a range of IP addresses for DHCP, and each client computer on the LAN is configured to request an IP address from the DHCP server during network initialization. The request-and-grant process uses a lease concept with a controllable time period, allowing the DHCP server to reclaim (and then reallocate) IP addresses that are not renewed.
·   Automatic allocation: The DHCP server permanently assigns an IP address to a requesting client from the range defined by the administrator. This is like dynamic allocation, but the DHCP server keeps a table of past IP address assignments, so that it can preferentially assign to a client the same IP address that the client previously had.
·       Static allocation: The DHCP server allocates an IP address based on a preconfigured mapping to each client's MAC address. This feature is variously called static DHCP assignment by DD-WRT, fixed-address by the dhcpd documentation, address reservation by Netgear, DHCP reservation or static DHCP by Cisco and Linksys, and IP address reservation or MAC/IP address binding by various other router manufacturers.
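
A hypothetical sketch of how a server might combine the three strategies is shown below; the static map, lease table and address pool contents are invented for illustration and lease expiry is not modelled.

# Illustrative-only allocation logic; names and addresses are placeholders.
STATIC_MAP = {"00:11:22:33:44:55": "192.168.1.10"}   # static allocation (MAC -> IP)
lease_history = {}                                    # past MAC -> IP grants
free_pool = ["192.168.1.100", "192.168.1.101", "192.168.1.102"]

def allocate(mac):
    # 1. Static allocation: a preconfigured MAC-to-IP mapping wins.
    if mac in STATIC_MAP:
        return STATIC_MAP[mac]
    # 2. Automatic allocation: prefer the address this client had before.
    if mac in lease_history and lease_history[mac] in free_pool:
        free_pool.remove(lease_history[mac])
        return lease_history[mac]
    # 3. Dynamic allocation: lease out the next free address from the pool.
    ip = free_pool.pop(0)
    lease_history[mac] = ip
    return ip

print(allocate("aa:bb:cc:dd:ee:ff"))   # -> 192.168.1.100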

DHCP is used for Internet Protocol version 4 (IPv4), as well as IPv6. While both versions serve the same purpose, the details of the protocol for IPv4 and IPv6 are sufficiently different that they may be considered separate protocols.

3D Printing

Additive manufacturing or 3D printing is a process of making a three-dimensional solid object of virtually any shape from a digital model. 3D printing is achieved using an additive process, where successive layers of material are laid down in different shapes. 3D printing is also considered distinct from traditional machining techniques, which mostly rely on the removal of material by methods such as cutting or drilling (subtractive processes).
 
A 3D printer is a limited type of industrial robot that is capable of carrying out an additive process under computer control.

General principles
·         Modeling: Additive manufacturing takes virtual models (3D blueprints) from computer-aided design (CAD) or animation modeling software and "slices" them into digital cross-sections for the machine to successively use as a guideline for printing. Depending on the machine used, material or a binding material is deposited on the build bed or platform until material/binder layering is complete and the final 3D model has been "printed."

[Figure: 3D model slicing]

·         Printing: To perform a print, the machine reads the design from an STL file and lays down successive layers of liquid, powder, paper or sheet material to build the model from a series of cross sections. These layers, which correspond to the virtual cross sections from the CAD model, are joined or automatically fused to create the final shape. The primary advantage of this technique is its ability to create almost any shape or geometric feature (a short file-reading sketch follows this list).
 
·         Finishing: Though the printer-produced resolution is sufficient for many applications, printing a slightly oversized version of the desired object in standard resolution and then removing material with a higher-resolution subtractive process can achieve greater precision.
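
As a rough illustration of the printing step referenced above, the following sketch reads a binary STL file and buckets its triangles into horizontal layers. The file name "model.stl" and the 0.2 mm layer height are assumptions, not part of the article.

# Minimal binary STL reader: 80-byte header, triangle count, 50 bytes per triangle.
import struct
from collections import defaultdict

LAYER_HEIGHT = 0.2  # mm, assumed

with open("model.stl", "rb") as f:
    f.read(80)                                    # skip the 80-byte header
    (count,) = struct.unpack("<I", f.read(4))     # number of triangles
    layers = defaultdict(int)
    for _ in range(count):
        data = struct.unpack("<12fH", f.read(50))  # normal, 3 vertices, attribute
        zs = data[5], data[8], data[11]            # z of the three vertices
        lo = int(min(zs) // LAYER_HEIGHT)
        hi = int(max(zs) // LAYER_HEIGHT)
        for layer in range(lo, hi + 1):            # the triangle spans these layers
            layers[layer] += 1

print(f"{count} triangles across {len(layers)} layers")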
 
Applications
Additive manufacturing's earliest applications have been on the toolroom end of the manufacturing spectrum. Standard applications include design visualization, prototyping/CAD, metal casting, architecture, education, geospatial, healthcare, and entertainment/retail.
 
·         Industrial uses
o   Rapid prototyping: Industrial 3D printers have existed since the early 1980s and have been used extensively for rapid prototyping and research purposes. These are generally larger machines that use proprietary powdered metals, casting media (e.g. sand), plastics, paper or cartridges, and are used for rapid prototyping by universities and commercial companies.
o   Rapid manufacturing: Advances in RP technology have introduced materials that are appropriate for final manufacture, which has in turn introduced the possibility of directly manufacturing finished components. One advantage of 3D printing for rapid manufacturing lies in the relatively inexpensive production of small numbers of parts.
o   Mass customization: Companies have created services where consumers can customize objects using simplified web based customization software, and order the resulting items as 3D printed unique objects.
o   Mass production: The current slow print speed of 3D printers limits their use for mass production. To reduce this overhead, several fused filament machines now offer multiple extruder heads. These can be used to print in multiple colors, with different polymers, or to make multiple prints simultaneously.

Telnet

Telnet is a network protocol used on the Internet or local area networks to provide a bidirectional interactive text-oriented communication facility using a virtual terminal connection. User data is interspersed in-band with Telnet control information in an 8-bit byte oriented data connection over the Transmission Control Protocol (TCP).
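
The in-band mixing of user data and control information can be seen with a short sketch over a raw TCP socket. It refuses every option the server proposes (IAC DO answered with IAC WONT, IAC WILL with IAC DONT); the host name is a placeholder and real Telnet clients implement much more of the negotiation.

# Minimal Telnet negotiation sketch; "example.com" is a placeholder host.
import socket

IAC, DO, DONT, WILL, WONT = 255, 253, 254, 251, 252

with socket.create_connection(("example.com", 23), timeout=5) as s:
    buf = s.recv(4096)
    i, text = 0, bytearray()
    while i < len(buf):
        if buf[i] == IAC and i + 2 < len(buf):           # in-band control sequence
            cmd, opt = buf[i + 1], buf[i + 2]
            if cmd == DO:
                s.sendall(bytes([IAC, WONT, opt]))       # refuse the option
            elif cmd == WILL:
                s.sendall(bytes([IAC, DONT, opt]))
            i += 3
        else:                                            # ordinary user data
            text.append(buf[i])
            i += 1
    print(text.decode("ascii", errors="replace"))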
 
Telnet was developed in 1969 beginning with RFC 15, extended in RFC 854, and standardized as Internet Engineering Task Force (IETF) Internet Standard STD 8, one of the first Internet standards.
 
Historically, Telnet provided access to a command-line interface (usually, of an operating system) on a remote host. Most network equipment and operating systems with a TCP/IP stack support a Telnet service for remote configuration (including systems based on Windows NT).
 
The term telnet also refers to the software that implements the client part of the protocol. Telnet client applications are available for virtually all computer platforms.
 
Security
Experts in computer security, such as the SANS Institute, recommend that the use of Telnet for remote logins be discontinued under all normal circumstances, for the following reasons:
 
· Telnet, by default, does not encrypt any data sent over the connection (including passwords), so it is often practical to eavesdrop on the communications and use the password later for malicious purposes; anybody with access to a router, switch, hub or gateway located on the network between the two hosts where Telnet is being used can intercept the passing packets with a packet analyzer and obtain the login, the password and whatever else is typed.
· Most implementations of Telnet have no authentication that would ensure communication is carried out between the two desired hosts and not intercepted in the middle.
· Several vulnerabilities have been discovered over the years in commonly used Telnet daemons.
 
Notably, a large number of industrial and scientific devices have only Telnet available as a communication option. Some are built with only a standard RS-232 port and use a serial server hardware appliance to provide the translation between the TCP/Telnet data and the RS-232 serial data. In such cases, SSH is not an option unless the interface appliance can be configured for SSH.

Bluetooth LE

Bluetooth low energy, Bluetooth LE, or BLE, marketed as Bluetooth Smart, is a wireless personal area network technology aimed at novel applications in the healthcare, fitness, security, and home entertainment industries. Compared to "Classic" Bluetooth, BLE is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range.
Mobile operating systems including iOS, Android, Windows Phone and BlackBerry, as well as OS X and Windows 8, natively support Bluetooth low energy. The Bluetooth SIG predicts more than 90 percent of Bluetooth-enabled smartphones will support the low energy standard by 2018.
Bluetooth low energy is not backward-compatible with the previous, often called Classic, Bluetooth protocol. The Bluetooth 4.0 specification permits devices to implement either or both of the LE and Classic systems. Those that implement both are known as Bluetooth 4.0 dual-mode devices.
Technical Details
Bluetooth low energy technology operates in the same spectrum range (the 2.400 GHz-2.4835 GHz ISM band) as Classic Bluetooth technology, but uses a different set of channels. Instead of Bluetooth's 79 1-MHz channels, Bluetooth low energy technology has 40 2-MHz channels. Within a channel, data is transmitted using Gaussian frequency-shift keying, similar to Classic Bluetooth's Basic Rate scheme. The bit rate is 1 Mbit/s, and the maximum transmit power is 10 mW.
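
The channel plan just described can be reproduced in a couple of lines: 40 channels, 2 MHz wide, centred from 2402 MHz upward.

# Worked example of the BLE channel plan: 40 channels spaced 2 MHz apart.
centres_mhz = [2402 + 2 * k for k in range(40)]
print(centres_mhz[0], centres_mhz[-1], len(centres_mhz))   # 2402 2480 40
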
Bluetooth low energy technology uses frequency hopping to counteract narrowband interference problems. Classic Bluetooth also uses frequency hopping but the details are different; as a result, while both FCC and ETSI classify Bluetooth technology as a Frequency-hopping spread spectrum scheme, Bluetooth low energy technology is classified as a system using digital modulation techniques or a direct-sequence spread spectrum.
Software model
All Bluetooth low energy devices use the Generic Attribute Profile (GATT). GATT has the following terminology:
·         Client - A device that initiates GATT commands and requests, and accepts responses, for example a computer or smartphone.
·         Server - A device that receives GATT commands and requests, and returns responses, for example a temperature sensor.
·         Characteristic - A data value transferred between client and server, for example the current battery voltage.
·         Service - A collection of related characteristics, which operate together to perform a particular function. For instance, the Health Thermometer service includes characteristics for a temperature measurement value, and a time interval between measurements.
·         Descriptor - A descriptor provides additional information about a characteristic. For instance, a temperature value characteristic may have an indication of its units (e.g. Celsius), and the maximum and minimum values which the sensor can measure.
Services, characteristics, and descriptors are collectively referred to as attributes and identified by UUIDs (Universally unique identifiers).
The GATT protocol provides a number of commands for the client to discover information about the server (a discovery sketch in code follows this list). These include:
·         Discover UUIDs for all primary services
·         Find a service with a given UUID
·         Find secondary services for a given primary service
·         Discover all characteristics for a given service
·         Find characteristics matching a given UUID
·         Read all descriptors for a particular characteristic
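
A hedged sketch of this discovery sequence using the third-party bleak Python library (an assumption, not something the text above prescribes); the device address is a placeholder.

# GATT discovery sketch with bleak; the address below is a placeholder.
import asyncio
from bleak import BleakClient

ADDRESS = "AA:BB:CC:DD:EE:FF"   # placeholder BLE device address

async def main():
    async with BleakClient(ADDRESS) as client:
        for service in client.services:                  # primary services
            print("Service", service.uuid)
            for char in service.characteristics:         # characteristics per service
                print("  Characteristic", char.uuid, char.properties)
                for desc in char.descriptors:            # descriptors per characteristic
                    print("    Descriptor", desc.uuid)

asyncio.run(main())
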
Applications
Borrowing from the original Bluetooth specification, the Bluetooth SIG defines several profiles — specifications for how a device works in a particular application — for low energy devices.
All current low energy application profiles are based on the generic attribute profile or GATT.
·         Health care profiles
o   HTP — for medical temperature measurement devices
o   GLP — for blood glucose monitors
o   BLP — for blood pressure measurement
·         Sports and fitness profiles
o   HRP — for devices which measure heart rate
o   CSCP — for sensors attached to a bicycle or exercise bike to measure cadence and wheel speed
o   RSCP — running speed and cadence profile
o   CPP — cycling power profile
o   LNP — location and navigation profile
·         Proximity sensing
o   FMP — the "find me" profile — allows one device to issue an alert on a second misplaced device.
o   PXP — the proximity profile — allows a proximity monitor to detect whether a proximity reporter is within a close range.
·         Alerts and time profiles
o   The phone alert status profile and alert notification profile allow a client device to receive notifications such as incoming call alerts from another device.
o   The time profile allows current time and time zone information on a client device to be set from a server device, such as between a wristwatch and a mobile phone's network time.

Network forensics


Network forensics is a sub-branch of digital forensics relating to the monitoring and analysis of computer network traffic for the purposes of information gathering, legal evidence, or intrusion detection. Unlike other areas of digital forensics, network investigations deal with volatile and dynamic information, which often makes network forensics a pro-active investigation.

Network forensics generally has two uses:

The first, relating to security, involves monitoring a network for anomalous traffic and identifying intrusions. An attacker might be able to erase all log files on a compromised host; network-based evidence might therefore be the only evidence available for forensic analysis. 

The second form of network forensics relates to law enforcement. In this case, analysis of captured network traffic can include tasks such as reassembling transferred files, searching for keywords and parsing human communication such as emails or chat sessions.

Two systems are commonly used to collect network data:

"Catch-it-as-you-can" - All packets passing through a certain traffic point are captured and written to large storage, with analysis done subsequently in batch mode.

"Stop, look and listen" - Each packet is analyzed in a rudimentary way in memory by a faster processor, and only certain information is saved for future analysis.

Types
Ethernet – Applying forensic methods on the Ethernet layer is done by eavesdropping bit streams with tools called monitoring tools or sniffers. The most common tool on this layer is Wireshark (formerly known as Ethereal). It collects all data on this layer and allows the user to filter for different events. With these tools, websites, email attachments and more that have been transmitted over the network can be reconstructed. An advantage of collecting this data is that it is directly connected to a host. If, for example, the IP address or the MAC address of a host at a certain time is known, all data for or from this IP or MAC address can be filtered.
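
The filtering step described above might look like the following sketch, assuming the scapy library and a previously saved capture file; the addresses and file name are placeholders.

# Filter a saved capture for traffic to or from a host of interest.
from scapy.all import rdpcap, Ether, IP

SUSPECT_MAC = "00:11:22:33:44:55"    # placeholder MAC address
SUSPECT_IP = "192.168.1.23"          # placeholder IP address

packets = rdpcap("capture.pcap")     # assumed capture file
related = [
    p for p in packets
    if (Ether in p and SUSPECT_MAC in (p[Ether].src, p[Ether].dst))
    or (IP in p and SUSPECT_IP in (p[IP].src, p[IP].dst))
]
print(len(related), "packets to or from the host of interest")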

TCP/IP – For the correct routing of packets through the network (e.g., the Internet), every intermediate router must maintain a routing table, which is one of the best sources of information when investigating a digital crime. To trace an attack, it is necessary to reverse the attacker's sending route, follow the packets, and find the computer the packet came from (i.e., the source of the attack).

Another source of evidence on this layer is authentication logs. They show which account and which user was associated with an activity and may reveal who the attacker was, or at least narrow down the set of possible suspects.

The Internet – The internet can be a rich source of digital evidence including web browsing, email, newsgroup, synchronous chat and peer-to-peer traffic.

Wireless forensics is a sub-discipline of network forensics. The main goal of wireless forensics is to provide the methodology and tools required to collect and analyze (wireless) network traffic that can be presented as valid digital evidence in a court of law. The evidence collected can correspond to plain data or, with the broad usage of Voice-over-IP (VoIP) technologies, especially over wireless, can include voice conversations.
 

Second Screen or Multi Screen


Second screen, sometimes also referred to as a "companion device" (or "companion app" when referring to a software application), is a term that refers to an additional electronic device (e.g. tablet, smartphone) that allows a television audience to interact with the content they are consuming, such as TV shows, movies, music, or video games. Extra data is displayed on a portable device synchronized with the content being viewed on television.

Several studies show a clear tendency of consumers to use a second device while watching television. They show high use of tablets or smartphones when watching television, and indicate that a high percentage of comments or posts on social networks are about the content being watched.

Based on these studies, many companies in both content production and advertising have adapted their content delivery to the lifestyle of the consumer in order to get maximum attention and thus profits. Applications are becoming a natural extension of television programming, both live and on demand.

Applications

Many applications in the "second screen" are designed to give the consumer another way of interactivity. They also give the media companies another way to sell advertising content. Some examples:

·         Broadcasts of the Masters golf tournament with a companion iPhone application (rating information and publicity).
·         TV programs that broadcast live tweets and comments.
·         Synchronization of audiovisual content with web advertising.
·         Applications that extend the content with additional information.
·         Shows that add content devoted exclusively to the second screen on their websites.
·         Applications that synchronize the content being viewed with the portable device.
·         Video game consoles that show extra data, such as a map or strategy information, on the portable device, synchronized with the content being viewed.
·         TV discovery applications with recommendations, EPG (live content) and personalization.

Sports Broadcasting

Sports broadcasters, to stem the flight of the TV audience away from the main screen (a new name for the television) to the second screen, are offering alternative and enhanced content alongside the main program. The idea is to present content related to the main program, such as unseen moments, alternative information, soundtracks, and characters. New technologies allow the viewer to see different camera angles while watching the game.

iBurst


iBurst (or HC-SDMA, High Capacity Spatial Division Multiple Access) is a wireless broadband technology which optimizes the use of its bandwidth with the help of smart antennas.

Description

HC-SDMA was announced as being under consideration by ISO TC204 WG16 for the continuous communications standards architecture known as Communications, Air-interface, Long and Medium range (CALM), which ISO is developing for intelligent transport systems (ITS). ITS may include applications for public safety, network congestion management during traffic incidents, automatic toll booths, and more.

The HC-SDMA interface provides wide-area broadband wireless data-connectivity for fixed, portable and mobile computing devices and appliances. The protocol is designed to be implemented with smart antenna array techniques (called MIMO for multiple-input multiple-output) to substantially improve the radio frequency (RF) coverage, capacity and performance for the system.

Technology

The HC-SDMA interface operates on a similar premise as cellular phones, with hand-offs between HC-SDMA cells repeatedly providing the user with seamless wireless Internet access even when moving at the speed of a car or train.

The protocol:

·         specifies base station and client device RF characteristics, including output power levels, transmit frequencies and timing error, pulse shaping, in-band and out-of band spurious emissions, receiver sensitivity and selectivity;

·         defines associated frame structures for the various burst types including standard uplink and downlink traffic, paging and broadcast burst types;

·         specifies the modulation, forward error correction, interleaving and scrambling for various burst types;

·         describes the various logical channels (broadcast, paging, random access, configuration and traffic channels) and their roles in establishing communication over the radio link; and

·         specifies procedures for error recovery and retry.

The protocol also supports Layer 3 (L3) mechanisms for creating and controlling logical connections (sessions) between client device and base including registration, stream start, power control, handover, link adaptation, and stream closure, as well as L3 mechanisms for client device authentication and secure transmission on the data links.

Usage

Various options are already commercially available using:

·         Desktop modem with USB and Ethernet ports (with external power supply)
·         Portable USB modem (using USB power supply)
·         Laptop modem (PC card)
·         Wireless Residential Gateway
·         Mobile Broadband Router

Assisted GPS


Assisted GPS, generally abbreviated as A-GPS or aGPS, is a system that can under certain conditions improve the startup performance, or time-to-first-fix (TTFF), of a GPS satellite-based positioning system. It is used extensively with GPS-capable cellular phones to make the location of a cell phone available to emergency call dispatchers.

"Standalone" or "autonomous" GPS operation uses radio signals from satellites alone. In very poor signal conditions, for example in a city, these signals may suffer multipath propagation where signals bounce off buildings, or are weakened by passing through atmospheric conditions, walls, or tree cover. When first turned on in these conditions, some standalone GPS navigation devices may not be able to fix a position due to the fragmentary signal, rendering them unable to function until a clearer signal can be received continuously for a long enough period of time.

An assisted GPS system can address these problems by using data available from a network to locate and use the satellites in poor signal conditions. For billing purposes, network providers often count this as a data access, which can cost money depending on the plan.

Basic Concepts
Standalone GPS provides the first position fix in approximately 30–40 seconds. A standalone GPS system needs orbital information about the satellites to calculate the current position. The data rate of the satellite signal is only 50 bit/s, so downloading orbital information like ephemeris and almanac directly from the satellites typically takes a long time, and if the satellite signals are lost during the acquisition of this information, it is discarded and the standalone system has to start from scratch. In AGPS, the network operator deploys an AGPS server. These AGPS servers download the orbital information from the satellites and store it in a database. An AGPS-capable device can connect to these servers and download this information using mobile network radio bearers such as GSM, CDMA, WCDMA, LTE or even other wireless radio bearers such as Wi-Fi. Usually the data rate of these bearers is high; hence downloading orbital information takes less time.
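
The timing claim can be checked with the well-known sizes of the GPS navigation message: 1500-bit frames, the ephemeris in the first three 300-bit subframes, and the almanac spread over 25 frames, all broadcast at 50 bit/s.

# Worked numbers behind the claim above, using standard GPS message sizes.
BIT_RATE = 50                       # bits per second from the satellite
frame_bits = 1500                   # one navigation frame
ephemeris_bits = 3 * 300            # subframes 1-3 carry the ephemeris
almanac_frames = 25                 # the almanac is spread over 25 frames

print("One frame:     ", frame_bits / BIT_RATE, "s")                          # 30.0 s
print("Ephemeris only:", ephemeris_bits / BIT_RATE, "s")                       # 18.0 s
print("Full almanac:  ", almanac_frames * frame_bits / BIT_RATE / 60, "min")   # 12.5 min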

AGPS has two modes of operation:

Mobile Station Assisted (MSA)

In MSA-mode A-GPS operation, the A-GPS capable device receives acquisition assistance, reference time and other optional assistance data from the A-GPS server. With the help of this data, the A-GPS device receives signals from the visible satellites and sends the measurements to the A-GPS server. The A-GPS server calculates the position and sends it back to the A-GPS device.

Mobile Station Based (MSB)

In MSB-mode A-GPS operation, the A-GPS device receives ephemeris, reference location, reference time and other optional assistance data from the A-GPS server. With the help of this data, the A-GPS device receives signals from the visible satellites and calculates the position itself.

Many mobile phones combine A-GPS with other location services, including the Wi-Fi positioning system and cell-site multilateration, sometimes as a hybrid positioning system.

Wireless Ad hoc


A wireless ad hoc network is a decentralized type of wireless network. The network is ad hoc because it does not rely on pre-existing infrastructure; instead, each node participates in routing by forwarding data to other nodes, so the determination of which nodes forward data is made dynamically based on network connectivity. In addition to classic routing, ad hoc networks can use flooding to forward data.

An ad hoc network typically refers to any set of networks where all devices have equal status on a network and are free to associate with any other ad hoc network devices in link range. Very often, ad hoc network refers to a mode of operation of IEEE 802.11 wireless networks.

Application
The decentralized nature of wireless ad hoc networks makes them suitable for a variety of applications where central nodes can't be relied on, and may improve the scalability of networks compared to wireless managed networks.

Minimal configuration and quick deployment make ad hoc networks suitable for emergency situations like natural disasters or military conflicts. The presence of dynamic and adaptive routing protocols enables ad hoc networks to be formed quickly.

Wireless ad hoc networks can be further classified by their application:
·         mobile ad hoc networks (MANET)
·         wireless mesh networks (WMN)
·         wireless sensor networks (WSN)

Technical requirements

An ad hoc network is made up of multiple “nodes” connected by “links”.

Links are influenced by the node's resources (e.g. transmitter power, computing power and memory) and by behavioral properties (e.g. reliability), as well as by link properties (e.g. length-of-link and signal loss, interference and noise). Since links can be connected or disconnected at any time, a functioning network must be able to cope with this dynamic restructuring, preferably in a way that is timely, efficient, reliable, robust and scalable.

The network must allow any two nodes to communicate, by relaying the information via other nodes. A “path” is a series of links that connects two nodes. Various routing methods use one or two paths between any two nodes; flooding methods use all or most of the available paths.
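
A minimal sketch of finding such a path over a snapshot of node connectivity is shown below; the adjacency map is hypothetical.

# Breadth-first search for a path of links between two nodes.
from collections import deque

links = {                        # hypothetical connectivity at one instant
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def find_path(src, dst):
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbour in links[path[-1]]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None

print(find_path("A", "E"))       # e.g. ['A', 'B', 'D', 'E']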

Remote Radio Head


A remote radio head is an operator radio control panel that connects to a remote radio transceiver via an electrical or wireless interface. When used to describe aircraft cockpit radio systems, this control panel is often called the radio head.

Current and future generations of wireless cellular systems feature heavy use of Remote Radio Heads (RRHs) in the base stations. Instead of hosting a bulky base station controller close to the top of antenna towers, new wireless networks connect the base station controller and remote radio heads through low-loss optical fibers. The interface protocol that enables such a distributed architecture is called the Common Public Radio Interface (CPRI). With this new architecture, RRHs offload intermediate frequency (IF) and radio frequency (RF) processing from the base station. Furthermore, the base station and RF antennas can be physically separated by a considerable distance, providing much needed system deployment flexibility.

Typical advanced processing algorithms on RRHs include digital up-conversion and digital down-conversion (DUC and DDC), crest factor reduction (CFR), and digital pre-distortion (DPD). DUC interpolates base band data to a much higher sample rate via a cascade of interpolation filters. It further mixes the complex data channels with IF carrier signals so that RF modulation can be simplified. CFR reduces the peak-to-average power ratio of the data so it does not enter the non-linear region of the RF power amplifier. DPD estimates the distortion caused by the non-linear effect of the power amplifier and pre-compensates the data.
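
As a rough numerical illustration of the CFR step, the sketch below measures the peak-to-average power ratio (PAPR) of a random test signal and applies simple hard clipping; real CFR algorithms are considerably more sophisticated, and numpy is assumed.

# PAPR measurement and naive hard clipping on stand-in baseband samples.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4096) + 1j * rng.normal(size=4096)    # stand-in baseband samples

def papr_db(signal):
    power = np.abs(signal) ** 2
    return 10 * np.log10(power.max() / power.mean())

threshold = 2.5 * np.sqrt(np.mean(np.abs(x) ** 2))         # clip at 2.5x the RMS level
clipped = np.where(np.abs(x) > threshold,
                   threshold * x / np.abs(x),              # keep phase, limit magnitude
                   x)

print("PAPR before: %.1f dB, after: %.1f dB" % (papr_db(x), papr_db(clipped)))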

More importantly, many wireless standards demand re-configurability in both the base station and the RRH. For example, the 3GPP Long Term Evolution (LTE) and WiMax systems both feature scalable bandwidth. The RRH should be able to adjust – at run time – the bandwidth selection, the number of channels, the incoming data rate, among many other things.

RRH system model

Typically, a base station connects to an RRH via optical cables. In the downlink direction, baseband data is transported to the RRH via CPRI links. The data is then up-converted to IF sample rates, preprocessed by CFR or DPD to mitigate non-linear effects of broadband power amplifiers, and eventually sent for radio transmission.


Hotspot Wi-Fi


Wireless internet has made technological life easy and convenient. There are different methods and technologies for using wireless internet everywhere and continuing our regular, internet-related work. One of these technologies is the Wi-Fi hotspot.

A hotspot is a site that offers Internet access over a wireless local area network through the use of a router connected to a link to an Internet service provider. Hotspots typically use Wi-Fi technology. With the help of our mobile devices we can access the wireless network through hotspots in coffee shops, malls, etc.

To set up a hotspot, all we need is:

1. a hotspot kit (hardware, software and a remote monitoring device)

2. a high-speed internet connection (DSL, T1 or DS3)

The type of hotspot kit depends on whether we need a single access point or multiple access points. If we are going for multiple access points, the area where we want to deploy the network has to be considered – whether it is a multi-storey building or a medium-sized hotel.

Setting up a Hotspot

If we already have a network held together by Ethernet and now want to upgrade to a wireless hotspot, we need to purchase a Wireless Access Point and connect it to the Ethernet network.

If we are starting from scratch, what we need is a Wireless Access Point Router. This kit contains:

· a port to connect the modem

· a router

· an Ethernet hub

· a firewall

· a wireless access point

We can then connect the computers with Ethernet cables or with wireless cards. Whichever we choose, once we power the Wireless Access Point on, the Wi-Fi hotspot will be functional.

We will need the 802.11a standard if we are setting up the network for business purposes. For home use, we can choose either 802.11b, which is the least expensive but also the slowest, or 802.11g, which costs a little more but is much faster.

 
To Bill or not to Bill

This depends on the nature and size of your business. Many businesses want to set up hotspots as another value added service to attract more customers.

If you decide to charge your customers, make sure that you choose a Wi-Fi provider who has a built-in package that helps with billing. The hotspot kit you purchase should enable you to accept credit cards at your gateway. In this model, you are likely to share the revenues with the service provider, and the service provider assists in the day-to-day operations and maintenance of the hotspot.

The so-called "User-Fairness-Model" is a dynamic billing model that allows volume-based billing, charged only by the amount of payload (data, video, audio). Moreover, the tariff is classified by net traffic and user needs. If the net traffic increases, the user has to pay the next higher tariff class; the user is also asked whether they wish to continue the session at the higher traffic class. In addition, time-critical applications (video, audio) are charged a higher class fare than non-time-critical applications (such as reading web pages or e-mail).
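
A hypothetical sketch of such a volume-based tariff with class escalation is given below; the class boundaries, prices and surcharge are invented purely for illustration.

# Illustrative volume-based billing with tariff classes; all numbers are made up.
TARIFF_CLASSES = [                       # (upper limit in MB, price per MB)
    (100, 0.02),
    (500, 0.04),
    (float("inf"), 0.08),
]
TIME_CRITICAL_SURCHARGE = 1.5            # video/audio pay a higher class fare

def bill(payload_mb, time_critical=False):
    for limit, price in TARIFF_CLASSES:
        if payload_mb <= limit:          # the traffic volume selects the class
            cost = payload_mb * price
            return cost * TIME_CRITICAL_SURCHARGE if time_critical else cost

print(bill(80))                          # browsing/e-mail, lowest class
print(bill(300, time_critical=True))     # video session, higher class fare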

Hotspot 2.0

Also known as HS2 and Wi-Fi CERTIFIED Passpoint, Hotspot 2.0 is a new approach to public-access Wi-Fi by the Wi-Fi Alliance. The idea is for mobile devices to automatically join a Wi-Fi subscriber service whenever the user enters a Hotspot 2.0 area. The intention is to provide better bandwidth and services-on-demand to end-users, while also relieving mobile carrier infrastructure of traffic overhead.

Hotspot 2.0 is based on the IEEE 802.11u standard, which is a new set of protocols to enable cellular-like roaming. If the device supports 802.11u and is subscribed to a Hotspot 2.0 service, it will automatically connect and roam.