
Beacons

A beacon is an intentionally conspicuous device designed to attract attention to a specific location.
 
Beacons can also be combined with semaphoric or other indicators to provide important information, such as the status of an airport, by the colour and rotational pattern of its airport beacon or of pending weather as indicated on a weather beacon mounted at the top of a tall building or similar site. When used in such fashion, beacons can be considered a form of optical telegraphy.
 
Usage
· For navigation – Beacons help guide navigators to their destinations. Types of navigational beacons include radar reflectors, radio beacons, and sonic and visual signals.
· For defensive communications – Classically, beacons were fires lit at well-known locations on hills or high places, used either as lighthouses for navigation at sea, or for signalling over land that enemy troops were approaching, in order to alert defenses. As signals, beacons are an ancient form of optical telegraphy, and were part of a relay league.
· On vehicles – Vehicular beacons are rotating or flashing lights affixed to the top of a vehicle to attract the attention of surrounding vehicles and pedestrians. Emergency vehicles such as fire engines, ambulances, police cars, tow trucks, construction vehicles, and snow-removal vehicles carry beacon lights.
 
In wireless networks, a beacon is a type of frame sent periodically by the access point (or Wi-Fi router) to announce that the network is present and active.
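
As a rough illustration, the sketch below uses the scapy library to listen for 802.11 beacon frames and print each access point's MAC address and advertised network name (SSID). The interface name "wlan0mon" is an assumption; capturing beacons requires a wireless card in monitor mode and the privileges to sniff on it.

```python
# A minimal sketch of listening for Wi-Fi beacon frames with scapy.
# Assumes an interface already in monitor mode; "wlan0mon" is a placeholder name.
from scapy.all import sniff
from scapy.layers.dot11 import Dot11Beacon, Dot11Elt

def handle_frame(frame):
    # Beacon frames advertise the network name (SSID) and the sender's MAC address.
    if frame.haslayer(Dot11Beacon):
        ssid = frame[Dot11Elt].info.decode(errors="replace")
        print(f"Beacon from {frame.addr2}  SSID: {ssid!r}")

# Capture 20 frames and pass each one to the handler.
sniff(iface="wlan0mon", prn=handle_frame, count=20)
```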
 
Beaconing is the process by which a network detects and recovers from faults: stations on the ring notify the other stations when they stop receiving transmissions. Beaconing is used in Token Ring and FDDI networks.

Telnet

Telnet is a network protocol used on the Internet or local area networks to provide a bidirectional interactive text-oriented communication facility using a virtual terminal connection. User data is interspersed in-band with Telnet control information in an 8-bit byte oriented data connection over the Transmission Control Protocol (TCP).
 
Telnet was developed in 1969 beginning with RFC 15, extended in RFC 854, and standardized as Internet Engineering Task Force (IETF) Internet Standard STD 8, one of the first Internet standards.
 
Historically, Telnet provided access to a command-line interface (usually, of an operating system) on a remote host. Most network equipment and operating systems with a TCP/IP stack support a Telnet service for remote configuration (including systems based on Windows NT).
 
The term telnet also refers to the software that implements the client part of the protocol. Telnet client applications are available for virtually all computer platforms.
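
As a small illustration of the client side, the sketch below uses Python's standard telnetlib module (deprecated in recent Python releases) to open a session and run one command. The host address, prompt strings and credentials are placeholders; note that, as discussed under Security below, everything here travels in clear text.

```python
# A minimal sketch of a Telnet client session using Python's standard telnetlib
# module. The host address, prompts and credentials are placeholders.
import telnetlib

HOST = "192.0.2.10"            # documentation/example address, replace as needed

tn = telnetlib.Telnet(HOST, 23, timeout=10)
tn.read_until(b"login: ")
tn.write(b"admin\n")           # sent in clear text over TCP port 23
tn.read_until(b"Password: ")
tn.write(b"secret\n")          # also unencrypted on the wire (see Security below)
tn.write(b"show version\n")    # run one command, then end the session
tn.write(b"exit\n")
print(tn.read_all().decode("ascii", errors="replace"))
```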
 
Security
Experts in computer security, such as the SANS Institute, recommend that the use of Telnet for remote logins be discontinued under all normal circumstances, for the following reasons:
 
· Telnet, by default, does not encrypt any data sent over the connection (including passwords), so it is often practical to eavesdrop on the communications and use the password later for malicious purposes. Anybody with access to a router, switch, hub or gateway located on the network between the two hosts where Telnet is being used can intercept the packets passing by and, with a packet analyzer, obtain the login, password and whatever else is typed.
· Most implementations of Telnet have no authentication that would ensure communication is carried out between the two desired hosts and not intercepted in the middle.
· Several vulnerabilities have been discovered over the years in commonly used Telnet daemons.
 
Notably, a large number of industrial and scientific devices have only Telnet available as a communication option. Some are built with only a standard RS-232 port and use a serial server hardware appliance to provide the translation between the TCP/Telnet data and the RS-232 serial data. In such cases, SSH is not an option unless the interface appliance can be configured for SSH.

Network forensics


Network forensics is a sub-branch of digital forensics relating to the monitoring and analysis of computer network traffic for the purposes of information gathering, legal evidence, or intrusion detection. Unlike other areas of digital forensics, network investigations deal with volatile and dynamic information, making network forensics often a pro-active investigation.

Network forensics generally has two uses:

The first, relating to security, involves monitoring a network for anomalous traffic and identifying intrusions. An attacker might be able to erase all log files on a compromised host; network-based evidence might therefore be the only evidence available for forensic analysis. 

The second form of network forensics relates to law enforcement. In this case, analysis of captured network traffic can include tasks such as reassembling transferred files, searching for keywords and parsing human communication such as emails or chat sessions.

Two systems are commonly used to collect network data:

"Catch-it-as-you-can" - This is where all packets passing through certain traffic point are captured and written to large storage with analysis being done subsequently in batch mode. 

"Stop, look and listen" - This is where each packet is analyzed by a faster processor in a rudimentary way in memory and only certain information saved for future analysis.

Types
Ethernet – Applying forensic methods on the Ethernet layer is done by eavesdropping bit streams with tools called monitoring tools or sniffers. The most common tool on this layer is Wireshark (formerly known as Ethereal). It collects all data on this layer and allows the user to filter for different events. With these tools, websites, email attachments and more that have been transmitted over the network can be reconstructed. An advantage of collecting this data is that it is directly connected to a host. If, for example, the IP address or the MAC address of a host at a certain time is known, all data for or from this IP or MAC address can be filtered.
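
As a minimal sketch of that filtering step, the snippet below reads a previously saved capture with the scapy library and prints every packet involving a suspect IP or MAC address. The capture file name and the addresses are placeholders for illustration only.

```python
# A minimal sketch of filtering a saved capture by a known IP or MAC address.
from scapy.all import rdpcap, IP, Ether

SUSPECT_IP = "203.0.113.7"
SUSPECT_MAC = "00:11:22:33:44:55"

packets = rdpcap("capture.pcap")   # e.g. a Wireshark/tcpdump capture file

for pkt in packets:
    ip_match = pkt.haslayer(IP) and SUSPECT_IP in (pkt[IP].src, pkt[IP].dst)
    mac_match = pkt.haslayer(Ether) and SUSPECT_MAC in (pkt[Ether].src, pkt[Ether].dst)
    if ip_match or mac_match:
        print(pkt.summary())       # one-line description of each matching packet
```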

 TCP/IP – For the correct routing of packets through the network (e.g., the Internet), every intermediate router maintains a routing table, which is one of the best sources of information when investigating a digital crime. To do this, it is necessary to trace the attacker's sending route back, following the packets to find the computer the packets came from (i.e., the attacker's source).

Another source of evidence on this layer is authentication logs. They show which account and which user was associated with an activity, and may reveal who the attacker was, or at least narrow down the set of people who could have been the attacker.

The Internet – The internet can be a rich source of digital evidence including web browsing, email, newsgroup, synchronous chat and peer-to-peer traffic.

Wireless forensics is a sub-discipline of network forensics. The main goal of wireless forensics is to provide the methodology and tools required to collect and analyze (wireless) network traffic that can be presented as valid digital evidence in a court of law. The evidence collected can correspond to plain data or, with the broad usage of Voice-over-IP (VoIP) technologies, especially over wireless, can include voice conversations.
 

Second Screen or Multi Screen


Second screen, sometimes also referred to as "companion device" (or "companion app" when referring to a software application), is a term that refers to an additional electronic device (e.g. tablet, smartphone) that allows a television audience to interact with the content they are consuming, such as TV shows, movies, music, or video games. Extra data is displayed on a portable device synchronized with the content being viewed on television.

Several studies show a clear tendency of the consumer to use a device while watching television. They show high use of tablet or smartphone when watching television, and indicate a high percentage of comments or posts on social networks being about the content that's being watched.

Based on these studies, many companies both in content production and advertising have adapted their delivery content to the lifestyle of the consumer in order to get maximum attention and thus profits. Applications are becoming a natural extension of television programming, both live and on demand.

Applications

Many applications in the "second screen" are designed to give the consumer another way of interactivity. They also give the media companies another way to sell advertising content. Some examples:

·         Broadcast of the Masters Golf Tournament with a companion application for the iPhone (rating information and publicity).
·         TV programs that broadcast live tweets and comments.
·         Synchronization of audiovisual content with web advertising.
·         Applications that extend the content with additional information.
·         Shows that add, on their websites, content devoted exclusively to the second screen.
·         Applications that synchronize the content being viewed with the portable device.
·         Video game consoles that send extra data, such as a map or strategy information, to the portable device, synchronized with the content being viewed.
·         TV discovery applications with recommendations, an EPG (live content), and personalization.

Sports Broadcasting

Sports broadcasters, seeking to stem the drift of the TV audience away from the main screen (a new name for the television itself) to the second screen, are offering alternative and enhanced content alongside the main program. The idea is to present content related to the main program, such as unseen moments, alternative information, soundtrack, and characters. New technologies allow the viewer to see different camera angles while watching the game.
            

iBurst


iBurst (or HC-SDMA, High Capacity Spatial Division Multiple Access) is a wireless broadband technology which optimizes the use of its bandwidth with the help of smart antennas.

Description

HC-SDMA was announced as being under consideration by ISO TC204 WG16 for the continuous communications standards architecture, known as Communications, Air-interface, Long and Medium range (CALM), which ISO is developing for intelligent transport systems (ITS). ITS may include applications for public safety, network congestion management during traffic incidents, automatic toll booths, and more.

The HC-SDMA interface provides wide-area broadband wireless data-connectivity for fixed, portable and mobile computing devices and appliances. The protocol is designed to be implemented with smart antenna array techniques (called MIMO for multiple-input multiple-output) to substantially improve the radio frequency (RF) coverage, capacity and performance for the system.

Technology

The HC-SDMA interface operates on a similar premise to cellular phones, with hand-offs between HC-SDMA cells providing the user with seamless wireless Internet access even when moving at the speed of a car or train.

The protocol:

·         specifies base station and client device RF characteristics, including output power levels, transmit frequencies and timing error, pulse shaping, in-band and out-of band spurious emissions, receiver sensitivity and selectivity;

·         defines associated frame structures for the various burst types including standard uplink and downlink traffic, paging and broadcast burst types;

·         specifies the modulation, forward error correction, interleaving and scrambling for various burst types;

·         describes the various logical channels (broadcast, paging, random access, configuration and traffic channels) and their roles in establishing communication over the radio link; and

·         specifies procedures for error recovery and retry.

The protocol also supports Layer 3 (L3) mechanisms for creating and controlling logical connections (sessions) between client device and base including registration, stream start, power control, handover, link adaptation, and stream closure, as well as L3 mechanisms for client device authentication and secure transmission on the data links.

Usage

Various options are already commercially available using:

·         Desktop modem with USB and Ethernet ports (with external power supply)
·         Portable USB modem (using USB power supply)
·         Laptop modem (PC card)
·         Wireless Residential Gateway
·         Mobile Broadband Router

Assisted GPS


Assisted GPS, generally abbreviated as A-GPS or aGPS, is a system that can under certain conditions improve the startup performance, or time-to-first-fix (TTFF), of a GPS satellite-based positioning system. It is used extensively with GPS-capable cellular phones to make the location of a cell phone available to emergency call dispatchers.

"Standalone" or "autonomous" GPS operation uses radio signals from satellites alone. In very poor signal conditions, for example in a city, these signals may suffer multipath propagation where signals bounce off buildings, or are weakened by passing through atmospheric conditions, walls, or tree cover. When first turned on in these conditions, some standalone GPS navigation devices may not be able to fix a position due to the fragmentary signal, rendering them unable to function until a clearer signal can be received continuously for a long enough period of time.

An assisted GPS system can address these problems by using data available from a network to locate and use the satellites in poor signal conditions. For billing purposes, network providers often count this as a data access, which can cost money depending on the plan.

Basic Concepts
Standalone GPS provides a first position in approximately 30–40 seconds. A standalone GPS system needs orbital information about the satellites to calculate the current position. The data rate of the satellite signal is only 50 bit/s, so downloading orbital information such as the ephemeris and almanac directly from the satellites typically takes a long time, and if the satellite signals are lost during the acquisition of this information, it is discarded and the standalone system has to start from scratch. In A-GPS, the network operator deploys an A-GPS server. These servers download the orbital information from the satellites and store it in a database. An A-GPS-capable device can connect to these servers and download this information over mobile-network radio bearers such as GSM, CDMA, WCDMA or LTE, or even over other wireless radio bearers such as Wi-Fi. The data rate of these bearers is usually high, so downloading the orbital information takes much less time.
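
The back-of-the-envelope sketch below illustrates the difference, using the standard GPS navigation-message figures (a 1500-bit frame every 30 seconds at 50 bit/s, with the full almanac spread over 25 frames); the mobile-network data rate used is an assumed, deliberately modest value.

```python
# A rough comparison of downloading orbital data from the satellite signal
# versus from an A-GPS server over a network bearer.
GPS_RATE_BPS = 50                  # legacy GPS navigation message data rate
FRAME_BITS = 1500                  # one navigation frame (contains ephemeris subframes)
ALMANAC_FRAMES = 25                # the full almanac is spread over 25 frames

frame_time = FRAME_BITS / GPS_RATE_BPS          # 30 s per frame
almanac_time = ALMANAC_FRAMES * frame_time      # 750 s = 12.5 minutes

NETWORK_RATE_BPS = 100_000         # assumed mobile-data rate, far above 50 bit/s
assisted_time = ALMANAC_FRAMES * FRAME_BITS / NETWORK_RATE_BPS

print(f"From the satellite: {frame_time:.0f} s per frame, "
      f"{almanac_time / 60:.1f} min for the full almanac")
print(f"From an A-GPS server (assumed link): {assisted_time:.2f} s")
```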

A-GPS has two modes of operation:

Mobile Station Assisted (MSA)

In the MSA mode of A-GPS operation, the A-GPS-capable device receives acquisition assistance, reference time and other optional assistance data from the A-GPS server. Using this data, the A-GPS device receives signals from the visible satellites and sends the measurements to the A-GPS server. The A-GPS server calculates the position and sends it back to the A-GPS device.

Mobile Station Based (MSB)

In the MSB mode of A-GPS operation, the A-GPS device receives the ephemeris, reference location, reference time and other optional assistance data from the A-GPS server. Using this data, the A-GPS device receives signals from the visible satellites and calculates the position itself.

Many mobile phones combine A-GPS with other location services, including Wi-Fi positioning and cell-site multilateration, sometimes as part of a hybrid positioning system.

Wireless Ad hoc


A wireless ad hoc network is a decentralized type of wireless network. The network is ad hoc because it does not rely on pre-existing routing infrastructure; instead, each node participates in routing by forwarding data for other nodes, and the determination of which nodes forward data is made dynamically based on the network connectivity. In addition to classic routing, ad hoc networks can use flooding to forward data.

An ad hoc network typically refers to any set of networks where all devices have equal status on a network and are free to associate with any other ad hoc network devices in link range. Very often, ad hoc network refers to a mode of operation of IEEE 802.11 wireless networks.

Application
The decentralized nature of wireless ad hoc networks makes them suitable for a variety of applications where central nodes can't be relied on, and may improve the scalability of networks compared to wireless managed networks.

Minimal configuration and quick deployment make ad hoc networks suitable for emergency situations like natural disasters or military conflicts. The presence of dynamic and adaptive routing protocols enables ad hoc networks to be formed quickly.

Wireless ad hoc networks can be further classified by their application:
·         mobile ad hoc networks (MANET)
·         wireless mesh networks (WMN)
·         wireless sensor networks (WSN)

Technical requirements

An ad hoc network is made up of multiple “nodes” connected by “links”.

Links are influenced by the node's resources (e.g. transmitter power, computing power and memory) and by behavioral properties (e.g. reliability), as well as by link properties (e.g. length-of-link and signal loss, interference and noise). Since links can be connected or disconnected at any time, a functioning network must be able to cope with this dynamic restructuring, preferably in a way that is timely, efficient, reliable, robust and scalable.

The network must allow any two nodes to communicate, by relaying the information via other nodes. A “path” is a series of links that connects two nodes. Various routing methods use one or two paths between any two nodes; flooding methods use all or most of the available paths.
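
The sketch below illustrates the flooding idea on a small, made-up topology: each node that hears a new message rebroadcasts it once to its neighbours, so the message reaches every connected node without any routing tables.

```python
# A minimal sketch of flooding-based forwarding over an ad hoc topology,
# modelled as an adjacency list. The topology below is invented for illustration.
from collections import deque

links = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def flood(source, message):
    """Every node rebroadcasts the message once to all of its neighbours."""
    seen = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in links[node]:
            if neighbour not in seen:      # drop duplicates, forward the rest
                seen.add(neighbour)
                print(f"{node} -> {neighbour}: {message}")
                queue.append(neighbour)

flood("A", "hello")
```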

Policy and Charging Rules Function


With surging demand for broadband connectivity and bandwidth, operators must maintain a delicate equilibrium between competitively priced offers and managing the congestion and cost of data traffic on their networks. Policy management enables operators to address network congestion by enforcing subscriber and application usage policies. More crucially, these policy controls also provide the means to innovate, personalize the customer experience, and monetize data usage. The Policy and Charging Rules Function (PCRF) plays an important role in enforcing policy management.

PCRF is the software node designated in real-time to determine policy rules in a multimedia network. As a policy tool, the PCRF plays a central role in next-generation networks. Unlike earlier policy engines that were added on to an existing network to enforce policy, the PCRF is a software component that operates at the network core and accesses subscriber databases and other specialized functions, such as a charging system, in a centralized manner. Because it operates in real time, the PCRF has an increased strategic significance and broader potential role than traditional policy engines. It is an important entity in the LTE core network domain.

The PCRF is the part of the network architecture that aggregates information to and from the network, operational support systems, and other sources (such as portals) in real time, supporting the creation of rules and then automatically making policy decisions for each subscriber active on the network. Such a network might offer multiple services, quality of service (QoS) levels, and charging rules. The PCRF can provide a network-agnostic solution (wireline and wireless) and a multi-dimensional approach, which helps create a lucrative and innovative platform for operators. The PCRF can be integrated with platforms such as billing, rating, charging, and subscriber databases, or it can be deployed as a standalone operational entity.

OAuth


OAuth 2.0 is an open authorization protocol which enables applications to access each other’s data; for example, it enables a user to log in to a single application (e.g. Google, Facebook, Foursquare, Twitter, etc.) and share the data in that application with other applications.
 
OAuth 2.0 is the next evolution of the OAuth protocol and is not backward compatible with OAuth 1.0.

Principle:

Example: a game web application uses OAuth 2.0 to share data with Facebook.

When a user accesses the game web application, he/she is asked to login to the game via Facebook. The user logs into Facebook and is sent back to the game. The game can now access the user’s data in Facebook, and call functions in Facebook on behalf of the user (e.g. posting status updates etc.).

OAuth 2.0 can be used either to create an application that can read user data from another application (e.g. the game in the example above) or an application that enables other applications to access its user data (e.g. Facebook in the example above).

 
OAuth 2.0 Roles

Resource Owner

The resource owner is the person or application that owns the data that is to be shared. With reference to the above example, the Facebook user is the resource owner.

Resource Server

The resource server is the server hosting the protected resources. In the above example, the Facebook server is the resource server.

Client Application
The client application is the application requesting access to the resources stored on the resource server. Here, the game application requesting access to the user’s Facebook account is the client application.

Authorization Server

The authorization server is the server authorizing the client app to access the resources of the resource owner. The authorization server and the resource server may or may not be the same server.
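
To make the roles concrete, the sketch below walks through the client-application side of the common authorization-code flow using the requests library. All URLs, client credentials and the "profile" scope are placeholder assumptions, not any real provider's endpoints.

```python
# A minimal sketch of the OAuth 2.0 authorization-code flow, seen from the
# client application. Endpoints and credentials below are placeholders.
import requests
from urllib.parse import urlencode

AUTHORIZE_URL = "https://auth.example.com/oauth/authorize"
TOKEN_URL     = "https://auth.example.com/oauth/token"
CLIENT_ID     = "my-game-app"
CLIENT_SECRET = "client-secret"
REDIRECT_URI  = "https://game.example.com/callback"

# Step 1: send the resource owner to the authorization server to log in and consent.
print(AUTHORIZE_URL + "?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "profile",
}))

# Step 2: the authorization server redirects back with a one-time code,
# which the client exchanges for an access token.
def exchange_code(code):
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    return resp.json()["access_token"]

# Step 3: use the access token to call the resource server on the user's behalf.
def get_profile(token):
    return requests.get("https://api.example.com/me",
                        headers={"Authorization": f"Bearer {token}"}).json()
```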

MultiSeat Desktop Virtualization


MultiSeat Desktop Virtualization is a method by which a common desktop PC, with extra keyboards, mice, and video screens directly attached to it, can be used to install, load, and concurrently run multiple operating systems. These operating systems can be the same across all "seats" or they can be different. It is similar to server-based computing only in that one central machine supports multiple users. It differs, however, in that the "terminals" are composed of nothing more than a regular keyboard, monitor and mouse, and these devices are plugged directly into the PC. USB hubs can be used for cable management of the keyboards and mice, and extra video cards (typically dual or quad output) may need to be installed to handle the multiple monitors.

Modern-day PCs are extremely powerful and have substantial excess CPU processing power. Server-based computing has been around for a long time specifically to take advantage of this excess CPU power and allow multiple users to share it. However, the typical problem with this type of system is that it depends on one operating system and one set of applications, and there are many software titles that are not allowed to be shared among multiple users.

Virtualization is a type of server-based computing. It is a method by which a "guest" operating system runs on top of, while being separated from, the hardware, and it can solve some of these problems. This means that multiple "guest" operating systems can be run, solving the problem of single-user applications not being able to be launched for multiple, concurrent users.

Multiseat desktop virtualization is an entirely new methodology which combines the cost saving benefits and ease of maintenance of server based computing, the time savings of hardware agnostic cloning, and the capabilities of desktop virtualization, with the performance capabilities of real PC functionality. It takes advantage of multiple cores in present day CPUs to enable ordinary users to install a multiseat PC giving 2 "seats" with a dual-core CPU or 4 "seats" with a quad-core CPU. The operating system of this PC is initially installed just like a regular PC. Regular PC users can install and use this type of product without having to install servers, or know how to manage complicated, server based computing or server based virtualization products.


Type | Standard server/TCP-IP based computing | Virtualized server/TCP-IP based computing | MultiSeat Desktop Virtualization
Can run all single user applications | No | Yes | Yes
Can run multimedia without buffering | No | No | Yes
Easy to install | No | No | Yes
Each "seat" has their own IP and MAC address | No | Yes | Yes
Each "seat" cloned image is hardware agnostic across different sets of hardware | No | Yes | Yes



Web Feed


A web feed (or news feed, or syndicated feed) is a data format used for providing users with frequently updated content. Content distributors syndicate a web feed, thereby allowing users to subscribe to it. Making a collection of web feeds accessible in one spot is known as aggregation, which is performed by client software called an aggregator (also called a feed reader or a news reader), which can be web-based, desktop-based, or mobile-device-based.

Technically, a web feed is a document (often XML-based) whose discrete content items include web links to the source of the content. News websites and blogs are common sources for web feeds, but feeds are also used to deliver structured information ranging from weather data to top-ten lists of hit tunes to search results. The two main web feed formats are RSS and Atom.

A typical scenario of web feed use is: a content provider publishes a feed link on their site which end users can register with an aggregator program running on their own machines. Aggregators can be scheduled to check for new content periodically. Web feeds are an example of pull technology, although they may appear to push content to the user.
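
A minimal sketch of the aggregator's polling step, using the third-party feedparser library (which handles both RSS and Atom); the feed URL is a placeholder.

```python
# A minimal sketch of an aggregator checking a feed for new items.
import feedparser

FEED_URL = "https://example.com/feed.xml"    # placeholder feed address

feed = feedparser.parse(FEED_URL)            # fetches and parses RSS or Atom alike
print(feed.feed.get("title", "(untitled feed)"))

for entry in feed.entries[:5]:               # show the five newest items
    print(entry.get("title"), "->", entry.get("link"))
```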

Benefits

Web feeds have some advantages compared to receiving frequently published content via email:

·         Users do not disclose their email address when subscribing to a feed and so are not increasing their exposure to threats associated with email: spam, viruses, phishing, and identity theft.

·         Users do not have to send an unsubscribe request to stop receiving news. They simply remove the feed from their aggregator.

·         The feed items are automatically sorted (unlike an email box where messages must be sorted by user-defined rules and pattern matching).

Vehicular Communication Systems


Vehicular communication systems (VCS) are an emerging type of network in which vehicles and roadside units are the communicating nodes, providing each other with information such as safety warnings and traffic information. This cooperative approach makes them more effective at avoiding accidents and traffic congestion.

The two types of nodes in vehicular communication systems, vehicles and roadside stations, are both Dedicated Short Range Communications (DSRC) devices, which work in the 5.9 GHz band with a bandwidth of 75 MHz and an approximate range of 1000 m.

Technical specifications:

Two categories of draft standards provide outlines for vehicular networks. They constitute a set of IEEE standards for a special mode of operation of IEEE 802.11 for vehicular networks called Wireless Access in Vehicular Environments (WAVE). IEEE 1609 is a family of standards which deals with issues such as management and security of the network:
·         1609.1 -Resource Manager: This standard provides a resource manager for WAVE, allowing communication between remote applications and vehicles.
·         1609.2 -Security Services for Applications and Management Messages
·         1609.3 -Networking Services: This standard addresses network layer issues in WAVE.
·         1609.4 -Multi-channel Operation: This standard deals with communications through multiple channels. 

Applications

Following are the categories of the possible applications of vehicular communication system:

·         Safety
·         Traffic management
·         Driver assistance systems
·         Policing and enforcement
·         Pricing and payments
·         Direction and route optimization
·         Travel-related information
·         General information services
·         Automated highways

Vehicular communications are usually developed as part of a bigger Intelligent Transport Systems (ITS) network. ITS seeks to achieve safety and productivity through intelligent transportation which integrates communication between mobile and fixed nodes. To this end, ITS relies heavily on wired and wireless communications.


Space Time Code


A space-time code (STC) is a method employed to improve the reliability of data transmission in wireless communication systems using multiple transmit antennas. STCs rely on transmitting multiple, redundant copies of a data stream to the receiver in the hope that at least some of them will survive the physical path between transmission and reception in a good enough state to allow reliable decoding.
                                                                                         
Space time codes may be split into two main types:

Space–time trellis codes (STTCs) – a type of space–time code used in multiple-antenna wireless communications. This scheme transmits multiple, redundant copies of a trellis (or convolutional) code distributed over time and across a number of antennas ('space'). These multiple, 'diverse' copies of the data are used by the receiver to attempt to reconstruct the actual transmitted data. For an STC to be used, there must necessarily be multiple transmit antennas, but only a single receive antenna is required; nevertheless, multiple receive antennas are often used, since doing so improves the performance of the system.

Space–time block codes (STBCs) – Space–time block coding is a technique used in wireless communications to transmit multiple copies of a data stream across a number of antennas and to exploit the various received versions of the data to improve the reliability of data transfer. The fact that the transmitted signal must traverse a potentially difficult environment with scattering, reflection, refraction and so on, and may then be further corrupted by thermal noise in the receiver, means that some of the received copies of the data will be 'better' than others. This redundancy results in a higher chance of being able to use one or more of the received copies to correctly decode the received signal. In fact, space–time coding combines all the copies of the received signal in an optimal way to extract as much information from each of them as possible.
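
As a concrete illustration, the sketch below works through the Alamouti scheme, the classic two-transmit-antenna, one-receive-antenna space–time block code, in the noiseless case; the channel gains and symbols are made-up values.

```python
# A minimal numpy sketch of the Alamouti space-time block code (2 Tx, 1 Rx).
# Noise is omitted so the linear combining step stays easy to follow.
import numpy as np

s1, s2 = 1 + 1j, -1 + 1j                 # two symbols to send
h1, h2 = 0.8 - 0.3j, 0.4 + 0.6j          # flat-fading gains from antennas 1 and 2

# Transmission over two symbol periods:
#   time 1: antenna 1 sends s1,         antenna 2 sends s2
#   time 2: antenna 1 sends -conj(s2),  antenna 2 sends conj(s1)
r1 = h1 * s1 + h2 * s2
r2 = h1 * (-np.conj(s2)) + h2 * np.conj(s1)

# Linear combining at the receiver recovers each symbol scaled by |h1|^2 + |h2|^2.
gain = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / gain
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / gain

print(np.allclose([s1_hat, s2_hat], [s1, s2]))   # True in the noiseless case
```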

In contrast to space–time block codes (STBCs), STTCs are able to provide both coding gain and diversity gain and have a better bit-error rate performance. However, being based on trellis codes, STTCs are more complex than STBCs to encode and decode; STTCs rely on a Viterbi decoder at the receiver where STBCs need only linear processing.

SODAR


SODAR (sonic detection and ranging) is a meteorological instrument used as a wind profiler to measure the scattering of sound waves by atmospheric turbulence. SODAR systems are used to measure wind speed at various heights above the ground, and the thermodynamic structure of the lower layer of the atmosphere.

Sodar systems are like radar (radio detection and ranging) systems except that sound waves rather than radio waves are used for detection. Other names used for sodar systems include sounder, echosounder and acoustic radar.

Commercial sodars operated for the purpose of collecting upper-air wind measurements consist of antennas that transmit and receive acoustic signals. A mono-static system uses the same antenna for transmitting and receiving, while a bi-static system uses separate antennas. The difference between the two antenna systems determines whether atmospheric scattering is by temperature fluctuations (in mono-static systems), or by both temperature and wind velocity fluctuations (in bi-static systems).

Phased-array antenna systems use a single array of speaker drivers and horns (transducers), and the beams are electronically steered by phasing the transducers appropriately. To set up a phased-array antenna, the pointing direction of the array is either level, or oriented as specified by the manufacturer. 

The vertical range of sodars is approximately 0.2 to 2 kilometers (km) and is a function of frequency, power output, atmospheric stability, turbulence, and, most importantly, the noise environment in which a sodar is operated. Operating frequencies range from less than 1000 Hz to over 4000 Hz, with power levels up to several hundred watts.
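
As a rough numerical illustration of the ranging and wind measurement, the sketch below converts an echo delay into the height of the scattering volume and a Doppler shift into a radial wind speed; the specific delay, frequencies and near-surface speed of sound are illustrative values.

```python
# A rough sketch of the two quantities a sodar works with: height from the
# echo delay, and radial wind speed from the Doppler shift of the return.
SPEED_OF_SOUND = 343.0        # m/s, near the surface at roughly 20 degrees C

def echo_height(delay_s):
    """Height of the scattering volume: the pulse travels up and back."""
    return SPEED_OF_SOUND * delay_s / 2.0

def radial_wind_speed(f_transmit, f_received):
    """Monostatic Doppler relation: the shift is proportional to 2v/c."""
    return SPEED_OF_SOUND * (f_received - f_transmit) / (2.0 * f_transmit)

print(echo_height(1.17))                         # about 200 m above the antenna
print(radial_wind_speed(2000.0, 2035.0))         # about 3 m/s toward the antenna
```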

Applications
Traditionally used in atmospheric research, sodars are now being applied as an alternative to traditional wind monitoring for the development of wind power projects. Sodars used for wind power applications are typically focused on a measurement range from 50m to 200m above ground level, corresponding to the size of modern wind turbines.

UltraViolet


UltraViolet is a digital rights authentication and cloud-based licensing system that gives users a "buy once, play anywhere" approach, allowing them to store digital proofs of purchase under one account to enable playback of content that is platform- and point-of-sale-agnostic.

UltraViolet is deployed by the 74 members of the Digital Entertainment Content Ecosystem consortium (DECE) and is a standalone application for devices that allow streaming and downloads of pre-purchased media.

User experience
Content consumers create a free-of-charge UltraViolet account, either through a participating service provider, or through the UltraViolet website, with six accounts allowed per household. An UltraViolet account provides access to a Digital Rights Locker where licenses for purchased content are stored and managed irrespective of the point of sale. The account holder may register up to 12 devices for streaming and/or downloading for transfer onto physical media (e.g. DVDs, SD cards, flash memory). Once downloaded, an UltraViolet file can be played on any UltraViolet player registered to the household account, but it will not play on devices which are not compatible with UltraViolet. Files can also be streamed over the Internet. Up to three streams can be simultaneously transmitted. Compatible devices include set-top boxes as well as Internet-enabled devices such as computers, game consoles, Blu-ray players, Internet TVs, smartphones, and tablets.

Digital locker
UltraViolet does not store files, and is not a "cloud storage" platform. The rights for purchased or rented content are stored on the service. UltraViolet only coordinates and manages the licenses for each account, but not the content itself. The content may be obtained in any way, in its standardized multi-DRM container format. By creating a digital-rights locker rather than a digital media storage locker, UltraViolet bypasses the cost of storage and bandwidth used when the media is accessed. In addition, by only managing the rights and licensing of content, UltraViolet insulates itself from future technological advances, allowing users to keep watching content they have purchased.

Standard File Formats
UltraViolet content is downloaded or streamed in the Common File Format, using the Common Encryption (CENC) system. This format is based on the Base ISO File Format, and ensures that a consistent set of codecs, media formats, DRMs, subtitling, and other kinds of data are used across the whole UltraViolet ecosystem. Because every UltraViolet title arrives in this format, it will generally play on any UltraViolet branded device.

UltraViolet files use SMPTE Timed Text (SMPTE TT), which is in turn based on the W3C Timed Text Markup Language (TTML). TT incorporates both Unicode text and PNG graphics for captions, subtitles for the deaf and hard of hearing (SDH), and other types of subtitles and sub pictures such as sign language and written commentaries.

Data Assimilation


Data assimilation is the process by which observations are incorporated into a computer model of a real system to provide an analysis of a current scenario and forecast future scenarios. Applications of data assimilation arise in many fields of geosciences, perhaps most importantly in weather forecasting and hydrology.

Data assimilation proceeds by analysis cycles. Considering weather prediction as an example, in each analysis cycle, observations of the current (and possibly past) state of a system are combined with the results from a numerical weather prediction (NWP) model (the forecast) to produce an analysis, which is considered as 'the best' estimate of the current state of the system. This is called the analysis step. Essentially, the analysis step tries to balance the uncertainty in the data and in the forecast. The model is then advanced in time and its result becomes the forecast in the next analysis cycle.
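
A minimal sketch of that balancing act for a single scalar variable: the analysis weights the forecast and the observation by their assumed error variances (all numbers below are illustrative, not from any real NWP system).

```python
# A minimal scalar analysis step: blend a forecast and an observation
# according to their (assumed) error variances.
forecast      = 287.0    # model background, e.g. temperature in kelvin
forecast_var  = 1.0      # assumed error variance of the forecast
observation   = 285.5    # measured value at the same location and time
obs_var       = 0.25     # assumed error variance of the observation

# Optimal weight: trust the observation more when the forecast is more uncertain.
gain = forecast_var / (forecast_var + obs_var)

analysis     = forecast + gain * (observation - forecast)
analysis_var = (1.0 - gain) * forecast_var

print(f"analysis = {analysis:.2f} K, analysis error variance = {analysis_var:.3f}")
# The analysis (285.8 K here) then initializes the model run for the next cycle.
```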

Weather forecasting applications
Data assimilation is used for combining observations of variables like temperature and atmospheric pressure into numerical models to predict weather.

In weather forecasting there are two main types of data assimilation: three-dimensional (3DDA) and four-dimensional (4DDA). In 3DDA only those observations available at the time of analysis are used. In 4DDA future observations are also included (thus adding the time dimension).

Future Development in NWP
The rapid development of the various data assimilation methods for NWP models is connected with two main factors in the field of numerical weather prediction:
1.      Utilizing observations currently seems to be the most promising way to improve the quality of forecasts at different spatial scales (from the planetary scale to the local city, or even street, scale) and time scales.
2.      The number of different kinds of available observations (sodars, radars, satellites) is rapidly growing.

Other applications of Data Assimilation
Data assimilation methods are currently also used in other environmental forecasting problems, e.g. in hydrological forecasting.

Given the abundance of spacecraft data for other planets in the Solar System, data assimilation is now also applied beyond the Earth to obtain re-analyses of the atmospheric state of extraterrestrial planets. So far, Mars is the first extraterrestrial planet to which data assimilation has been applied.