Indian Telecom Buzz: February 2012

Wednesday, 29 February 2012

Universal Mobile Charger


In partnership with many leading mobile operators and manufacturers, the GSM Association has announced a commitment to implementing a cross-industry standard for a Universal Charging Solution (UCS) for new mobile phones. The main objective is to adopt a common format for mobile phone charger connections and energy-efficient chargers worldwide. The initiative aims at chargers that:
  • reduce standby energy consumption
  • eliminate thousands of tonnes of duplicate chargers
  • enhance the end-user experience for mobile customers
The initiative has also been endorsed by the ITU. The European Commission recently reached an agreement with major phone manufacturers for the UCS to work with all data-enabled phones sold in the European Union. The product definition includes a common power supply with a detachable cable based on USB-IF standards.

UCS advantage
UCS is based on a Common Power Supply (CPS) with at least a 4-star energy rating, so it will meet all efficiency regulations. With the UCS in place, fewer chargers need to be manufactured each year, which helps reduce the greenhouse gases produced in making and shipping replacement chargers. Widespread adoption of the UCS is expected to result in:
  • up to a 50% reduction in standby energy consumption
  • the elimination of up to 51,000 tonnes of duplicate chargers
  • a better end-user experience and simpler charging of mobile devices

For the consumer, charging a mobile device becomes much simpler. Consumers will be able to carry fewer chargers and to charge their phones anywhere from any available charger. They will also be able to reuse chargers when they upgrade their phone, or carry a single charger for phones from different manufacturers.

The initiative was launched in 2009, and the group expects universal adoption of the UCS by 2012.

Mobile TV Technologies


Some form of mobile TV or other has been around for quite some time. The technology has existed for a while but has not yet matured, and it is still to find a foothold in many countries around the world. Mobile TV is expected to combine broadcast content with streamed and downloaded content.

Mobile TV
Mobile TV means television content that can be watched on small hand-held devices. It may be a pay-TV service broadcast over mobile phone networks, or received free-to-air from terrestrial television stations using either a regular broadcast or a special mobile TV transmission format. Some mobile TV devices can also download television shows from the Internet, including recorded TV programmes and podcasts. In other words, the content may arrive either over an existing cellular network or over a proprietary broadcast network.

Standards
DVB-H (Digital Video Broadcasting - Handheld)
This is one of the major mobile TV formats; DVB-H was formally adopted as an ETSI standard as early as November 2004. DVB-SH (Satellite to Handhelds) and DVB-NGH (Next Generation Handheld) are possible enhancements to DVB-H, providing improved spectral efficiency and better modulation flexibility.

ATSC-M/H (Advanced Television Systems Committee - Mobile/Handheld)
This standard for mobile digital TV allows TV broadcasts to be received by mobile devices. ATSC-M/H is an extension of the existing digital TV broadcasting standard ATSC A/53. ATSC itself is optimized for fixed reception and uses 8VSB modulation.

MediaFLO 
This technology transmits video and data to portable devices. In the United States, the service powered by this technology is branded as FLO TV. Broadcast data transmitted via MediaFLO includes live, real-time audio and video streams, as well as scheduled video and audio clips and shows. The technology can also carry IP datacast application data.

The Faster Internet


The Internet is founded on a very simple premise: shared communications links are more efficient than dedicated channels that lie idle much of the time. And so we share. We share local area networks at work and neighborhood links from home. And then we share again: at any given time, a terabit backbone cable is shared among thousands of users surfing the Web and downloading videos. But there's a profound flaw in the protocol that governs how people share the Internet's capacity. The protocol allows you to seem polite, even as you elbow others aside, taking far more resources than they do.
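
To get a back-of-the-envelope feel for why sharing wins, here is a small Python sketch with entirely invented numbers (1,000 users, 10 Mb/s peak rate, each active 5% of the time): a shared link with a tenth of the dedicated capacity almost never overloads.

```python
# Toy illustration of statistical multiplexing; all numbers are invented.
import random

random.seed(1)

USERS = 1000
PEAK_MBPS = 10            # each user's peak rate when actively transmitting
ACTIVE_PROB = 0.05        # fraction of time a user is actually transmitting
SHARED_LINK_MBPS = 1000   # shared link: 10% of the 10,000 Mb/s of dedicated capacity

TRIALS = 10_000
overloaded = 0
for _ in range(TRIALS):
    active_users = sum(random.random() < ACTIVE_PROB for _ in range(USERS))
    if active_users * PEAK_MBPS > SHARED_LINK_MBPS:
        overloaded += 1

print(f"shared link overloaded in {overloaded / TRIALS:.2%} of sampled instants")
# Dedicated channels would need ten times the capacity, yet each would sit idle ~95% of the time.
```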

You might be shocked to learn that the designers of the Internet intended that your share of Internet capacity would be determined by what your own software considered fair. They gave network operators no mediating role between the conflicting demands of the Internet's hosts. The Internet's primary sharing algorithm is built into the Transmission Control Protocol, a routine on your own computer that most programs use to send and receive data. TCP is one of the twin pillars of the Internet, the other being the Internet Protocol, which delivers packets of data to particular addresses. The two together are often called TCP/IP.

Forcing the way!
Your TCP routine constantly increases your transmission rate until packets fail to get through! Then TCP very politely halves your bit rate. The mechanism is known as additive increase, multiplicative decrease (AIMD). Quite a name, isn't it? All the other TCP routines around the Internet behave in just the same way, in a cycle of taking, then giving, that fills the pipes while sharing them equally.
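
A minimal sketch of that take-then-give cycle, using made-up numbers rather than real TCP internals: two flows share a hypothetical 10 Mb/s bottleneck, each adds a fixed increment per round trip, and both halve their rates when the link overflows. However unequal they start, they drift toward an equal split.

```python
# Toy AIMD (additive increase, multiplicative decrease) simulation.
# The capacity, increment and starting rates are illustrative, not real TCP parameters.

CAPACITY = 10.0    # shared bottleneck capacity, Mb/s
INCREMENT = 0.5    # additive increase per round trip, Mb/s
ROUNDS = 200

rates = [8.0, 1.0]  # two flows start with very different rates

for _ in range(ROUNDS):
    # Each flow keeps probing for more bandwidth.
    rates = [r + INCREMENT for r in rates]
    # When the pipe overflows, packets are lost and every flow politely halves its rate.
    if sum(rates) > CAPACITY:
        rates = [r / 2 for r in rates]

print("final rates (Mb/s):", [round(r, 2) for r in rates])
# Despite the unequal start, the two flows end up with roughly equal shares.
```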

Fair play?
An equal bit rate for each data flow is likely to be extremely unfair, by any realistic definition. It's like insisting that boxes of food rations must all be the same size, no matter how often each person returns for more or how many boxes are taken each time. But any programmer can just run the TCP routine multiple times to get multiple shares. It's much like getting around a food-rationing system by duplicating ration coupons. This trick has always been recognized as a way to sidestep TCP's rules; the first Web browsers opened four TCP connections!

The solution!
There's a far better solution, according to Bob Briscoe. It would allow light browsing to go blisteringly fast while hardly prolonging heavy downloads at all. The solution comes in two parts. It begins by making it easier for programmers to run TCP multiple times, a deliberate break from TCP-friendliness. They set a new parameter, a weight, so that whenever your data comes up against other traffic all trying to get through the same bottleneck, you get a share of the total proportional to your weight. The key is to set the weights high for light interactive usage, like surfing the Web, and low for heavy usage, such as movie downloading.
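
To make the arithmetic concrete, here is a small illustration under the assumption that a bottleneck splits its capacity in proportion to each flow's weight, and that opening k plain TCP connections behaves like a single flow of weight k:

```python
# Illustration only: proportional sharing of a bottleneck by weight.

def shares(capacity_mbps, weights):
    """Split capacity in proportion to each named flow's weight."""
    total = sum(weights.values())
    return {name: round(capacity_mbps * w / total, 2) for name, w in weights.items()}

# Plain TCP: every connection counts as weight 1, so a browser opening
# four connections (like the first Web browsers) simply grabs four shares.
print(shares(10.0, {"browser (4 connections)": 4, "single download": 1}))

# Weighted TCP: light interactive traffic gets a high weight and bulk
# downloads a low one, regardless of how many connections each opens.
print(shares(10.0, {"web browsing": 8, "movie download": 1}))
```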

Imagine a world where some Internet service providers offer a deal for a flat price but with a monthly congestion-volume allowance. Note that this allowance doesn't limit downloads as such; it limits only those that persist during congestion. If you used a peer-to-peer program like BitTorrent to download 10 videos continuously, you wouldn't bust your allowance so long as your TCP weight was set low enough. Your downloads would draw back during the brief moments when flows with higher weights came along. But in the end, your video downloads would finish hardly later than they do today.
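
A rough sketch of how such an allowance might be counted, assuming the provider meters only the bytes you send while the path is congested; the traffic figures and the allowance below are purely hypothetical.

```python
# Hypothetical congestion-volume accounting: only bytes transferred during
# congested intervals count against the monthly allowance.

# (megabytes_transferred, path_congested) for a series of time intervals.
traffic = [
    (500, False),   # bulk download while the network is quiet
    (300, False),
    (40,  True),    # the same transfer persisting through a busy period
    (700, False),
    (25,  True),
]

congestion_volume = sum(mb for mb, congested in traffic if congested)
total_volume = sum(mb for mb, _ in traffic)

ALLOWANCE_MB = 1000  # hypothetical monthly congestion-volume allowance

print(f"total transferred: {total_volume} MB")
print(f"counted against allowance: {congestion_volume} MB of {ALLOWANCE_MB} MB")
# Heavy downloading as such is not penalized; only traffic that pushes through congestion is metered.
```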

4G: The Race of Two Technologies


A long-term battle is brewing between two emerging high-speed wireless technologies, WiMax and Long Term Evolution (LTE). Each would more than quadruple existing wireless wide-area access speeds for users. Both are 4G technologies designed to move data rather than voice, and both are IP networks based on OFDM technology.
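
For readers unfamiliar with OFDM, here is a bare-bones sketch of the core idea, not specific to LTE or WiMax: data symbols are placed on many narrow subcarriers, combined with an inverse FFT, and a cyclic prefix is added to absorb multipath echoes.

```python
# Bare-bones OFDM transmitter sketch (illustrative only, not LTE/WiMax compliant).
import numpy as np

N_SUBCARRIERS = 64
CYCLIC_PREFIX = 16

# Random QPSK symbols, one per subcarrier.
bits = np.random.randint(0, 2, size=(N_SUBCARRIERS, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# The inverse FFT turns the per-subcarrier symbols into one time-domain OFDM symbol.
time_signal = np.fft.ifft(qpsk) * np.sqrt(N_SUBCARRIERS)

# Prepend a cyclic prefix (a copy of the symbol's tail) to protect against multipath.
ofdm_symbol = np.concatenate([time_signal[-CYCLIC_PREFIX:], time_signal])

print("OFDM symbol length with cyclic prefix:", ofdm_symbol.size)  # 80 samples
```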


The two technologies are somewhat alike in the way they transmit signals and even in their network speeds. The meaningful differences have more to do with politics - specifically, which carriers will offer which technology.

The Genesis
WiMax is based on an IEEE standard (802.16). It's an open standard that was debated by a large community of engineers before being ratified. That openness means WiMax equipment is standardized and therefore cheaper to buy!

LTE, or Long Term Evolution, is a 4G wireless technology considered the next step in the GSM evolution path after the UMTS/HSPA 3G technologies. LTE is championed and standardized by the members of the 3GPP (3rd Generation Partnership Project), a global telecommunications consortium with members in most GSM-dominant countries.

LTE vs WiMAX
Whereas WiMAX emerged from the WiFi IP paradigm, LTE is a product of the classic GSM technology path. LTE is behind in the race to 4G, with WiMAX getting an early lead as the likes of Sprint, Clearwire and several operators in Asia opt to go with WiMAX in the near term. So where WiMAX has a speed-to-market advantage, LTE has massive expected adoption and its GSM parentage to back it up.

LTE will take time to roll out, with deployments expected to reach mass adoption by 2012. WiMax is out now, and more networks should be available later this year.

Speed offered
On paper, LTE will be faster than the current generation of WiMax, but 802.16m, which should be ratified this year, offers similar speeds. The speeds expected of both LTE and WiMax are hard to nail down, primarily because the technologies are just rolling out, and many factors have to be taken into consideration. Speed for an end user also depends on how many users are connected to a cell tower, how far away they are, what frequency is used, the processing power of the user's device, and other factors.

Who will win?
For end users, the current debate over WiMax vs. LTE is largely theoretical but nonetheless important. Analysts see a clear dominance by LTE in a few years, since so many carriers are bound to adopt it. However, that won't serve every user or every company. The future is still going to be a combination of technologies and players; WiMax may be one of them, but not the only one!