Here's a gamble: pick a letter of the alphabet between A and G (inclusive). Now be prepared to bet a lot of money that your choice represents an enduring technology direction. That's something of the challenge facing organizations contemplating wireless networking rollouts in the next 18 months. Do you sink your money into equipment enabled to run on the emerging 802.11a and 802.11g Wi-Fi standards, or look for new and improved services being offered over conventional cellular networks (in the expectation that those services, some of which remain expensive, will become cheaper)?
Looking further ahead, the next decade will see many changes, most likely including far wider availability of Wi-Fi hotspots, the emergence of metro-area Wi-Fi networks and the extension of Wi-Fi access to vehicles and planes (with which at least two airlines are experimenting right now).
Security in the Wi-Fi area will (presumably) be a lot better, with users demanding much higher standards than at present as the technology moves into the mainstream.
The Wi-Fi world is set to receive a boost in the near future if a draft standard before the Institute of Electrical and Electronics Engineers (IEEE) is accepted.
The 802.3af draft outlines a way to run electricity over ethernet cables, paving the way for easier deployment of wireless LANs by removing the need to run both power and network lines to wireless LAN endpoints.
At least two vendors, 3Com and Foundry Networks, have released PoE (power over ethernet) hardware, as the technology also has benefits for wired networks.
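The practical appeal of 802.3af is easy to see with a little arithmetic. As a sketch (the per-port figures below are the commonly cited draft-802.3af limits: roughly 15.4W sourced per port, about 12.95W usable at the powered device after cable loss; the function and supply sizes are illustrative, not from any vendor's specs):

```python
# Rough power-budget check for powering wireless LAN access points
# over ethernet under draft 802.3af assumptions.
PSE_PORT_WATTS = 15.4   # max power a port must be able to source
PD_MAX_WATTS = 12.95    # max draw allowed at the powered device

def ports_supported(psu_watts: float, draw_per_device: float) -> int:
    """How many full-power PoE ports a switch power supply can back,
    assuming each port is budgeted for the worst-case 15.4 W."""
    if draw_per_device > PD_MAX_WATTS:
        raise ValueError("device exceeds the 802.3af per-port budget")
    return int(psu_watts // PSE_PORT_WATTS)

# e.g. a 200 W supply backing access points that each draw 8 W
print(ports_supported(200, 8.0))  # 12
```

The conservative per-port budgeting mirrors how PoE switches are typically provisioned: the supply must cover worst-case draw on every powered port, not the average.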
Foremost among wireless questions to be answered in the next few years is whether 802.11a or 802.11g will become the natural successor to the present wireless LAN industry standard, 802.11b.
Both offer speeds of up to 54Mbit/s in ideal conditions; in practice, throughput of 30Mbit/s or so is more realistic.
802.11a operates in a different radio frequency band from 802.11b, so those already using b face compatibility issues in migrating to a; g, which runs in the same spectrum as b, offers a clearer path.
Some wireless LAN hardware manufacturers are going with g exclusively, but most are hedging their bets and shipping dual-mode products, so just which will become b's natural successor is unclear at this stage.
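The gap between the nominal 54Mbit/s and the 30Mbit/s or so seen in practice is easiest to feel as transfer time. A minimal sketch (the file size is arbitrary, and the "realistic" rate is the article's own ballpark figure for protocol overhead and radio conditions):

```python
# Transfer time for a 700 MB file at nominal vs. realistic 802.11a/g rates.
def transfer_seconds(size_mb: float, rate_mbit: float) -> float:
    """Seconds to move size_mb megabytes at rate_mbit megabits/sec."""
    return size_mb * 8 / rate_mbit

print(round(transfer_seconds(700, 54), 1))  # nominal 54 Mbit/s: 103.7 s
print(round(transfer_seconds(700, 30), 1))  # realistic 30 Mbit/s: 186.7 s
```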
Once the a-g contest is settled, however, it won't be the end of wireless LAN development. Security, generally cited as the biggest flaw in the wireless LAN world today, will be addressed through 802.1X, a user authentication mechanism, and EAP (Extensible Authentication Protocol), which comes in many variants.
Once security concerns are addressed, and assuming wireless LAN use really takes off, further steps in its development may include 802.11e, a quality-of-service extension that would let a, g, or b (if that is still the dominant standard) run better and allow streaming multimedia to be delivered via wireless LANs in consumers' homes.
Going further down the track, 802.15.3, or high-rate wireless personal area networks, will further enhance the delivery of multimedia content in users' homes, but its range is extremely limited (about 10m) and it has been touted as a competitor to Bluetooth more than to 802.11 wireless LANs.
Wide area network cellular technologies are likely to see a lot of change in the next decade as well, with many tipping that the competing GSM/GPRS (Global System for Mobile Communications/General Packet Radio Service) and CDMA (Code Division Multiple Access) cellular data standards will ultimately converge on UMTS (Universal Mobile Telecommunications System) TD-CDMA, also known as wideband CDMA.
After that, who knows where it will go and by then, there may well be 802.11-based metropolitan networks that use mesh architecture to give pervasive coverage in areas of 20km or so and which seamlessly pass users over to GPRS/CDMA networks when they leave the metro area.
U.S. scientist Joseph Mitola has come up with a concept he believes will enhance third- and fourth-generation cellular offerings.
He calls it cognitive radio. It would involve giving wireless devices learning capability, so they would be able to learn what their owners like and don't like.
"It would have enough flexibility in the hardware to be programmed to a band or mode, so instead of being stuck in the 800-900MHz band, it would be able to adapt over to an ISM, IEEE or 5GHz.
"The cognitive radio would know what to do based on experience; it knows where home is.
"You get in the car to go to work and it's measuring the radio propagation, signal strength and the quality of the different bands as it drives around with you.
"It's building this nifty internal database of what it can do when and where."
Metropolitan area networks
Wireless MANs (metropolitan area networks) based on meshed Wi-Fi may be on the way, but the wired variety of MAN isn't going to be supplanted any time soon and will experience considerable enhancements over the next decade.
Among them are likely to be uptake of 10Gbit/s ethernet, increased use for storage-related activities (in light of the recent approval of iSCSI, the standard for carrying SCSI storage traffic over IP networks) and bandwidth-on-demand from consumers using content services delivered over the MAN.
One of the biggest MAN developments in the past few years has been the scaling up of ethernet to 1Gbit/s, thus allowing the well-established and low-cost LAN (local area network) technology to be extended to the MAN.
Richard Naylor, technical director at Wellington MAN operator CityLink, said that now the 10Gbit/s ethernet standard is out, we'll see a lot more 10Gbit/s ethernet gear.
"The next debate will be whether it will be 40 or 100Gbit/s. It may be both, 40Gbit/s for telcos and 100Gbit/s for the LAN market."
Regarding applications, the name of the game in the next few years is going to be video, he said, particularly desk-to-desk videoconferencing.
"There's already a gigabit to the desktop and you can get broadcast quality video at less than 1Gbit/s today."
Naylor believes iSCSI and IP (Internet Protocol) storage over MANs won't mean much for small and medium-sized enterprises, which will continue to be able to get inexpensive storage by other means.
However, for larger enterprises it's very significant; storage service providers will be able to link direct with their customers, rather than locating storage points of presence at a co-location facility.
Microsoft is the latest, in a list of vendors that includes HP and Network Appliance, to announce plans to add iSCSI support to its products.
VDSL (very high bit rate DSL), presently in place in several locations around the world, may become more commonplace as the cost of the presently expensive necessary technology comes down. Most rollouts of VDSL take advantage of existing utility infrastructure.
VDSL offers speeds of up to 52Mbit/s down and 16Mbit/s up, as opposed to ADSL (the most common DSL variant today and the one Telecom New Zealand's JetStream is based on), which allows a minimum of 2Mbit/s down and 250kbit/s up.
Much of VDSL's high cost comes from the need to place DSL equipment much closer to users' homes than ADSL requires. To ADSL-enable a residence, the exchange must be no more than 5km away, but with VDSL that distance shrinks to 300m.
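Those reach figures make the deployment economics concrete: a quick sketch of which variants a given copper-loop length permits, using only the distances quoted above (real loop qualification also depends on wire gauge and line quality, which this ignores):

```python
# Which DSL variants the quoted reach limits permit for a given loop
# length: ADSL within 5 km of the exchange, VDSL within 300 m.
REACH_M = {"ADSL": 5000, "VDSL": 300}

def available_variants(loop_metres: float) -> list:
    """Variants whose maximum reach covers this copper-loop length."""
    return [name for name, reach in REACH_M.items() if loop_metres <= reach]

print(available_variants(250))   # ['ADSL', 'VDSL']
print(available_variants(2000))  # ['ADSL']
print(available_variants(8000))  # []
```

The empty result for an 8km loop is why rural broadband remains a separate problem from either variant.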
Another emerging technology that aims to bring broadband to the home at much faster rates than ADSL is ETTH (ethernet to the home), standards for which are formally being developed by the Ethernet in the First Mile Alliance (EFMA), a consortium of representatives from the IEEE and companies including Cisco, Ericsson and Lucent.
The EFMA is looking at enabling ethernet over existing copper wire, which would be far more cost-effective than trying to get fiber to every home, but the most likely outcome is that it will be delivered over a mixture of copper and fiber.
Another IEEE standard allows for gigabit ethernet over copper to commercial buildings, but uptake has been slow due to cost, though that is tipped to change.
10Gbit/s ethernet may also find application in areas such as linking supercomputers, such as the Beowulf cluster at Massey University, with other ultra-high-performance processing devices. Development of NGI (next generation Internet) networks, with speeds of 40Gbit/s or more, will also continue, though those are likely to remain private and be used by research institutions for super-bandwidth-intensive work such as modelling biological and physiological material.
Work has begun on an NGI for New Zealand, with the NGI Consortium formed last year.
People in remote areas unable to access the internet over wires are likely to have more options as means of transmitting wirelessly move beyond the LAN and are extended by use of directional antennas and daisy chaining to cover many kilometers.
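The reason directional antennas extend range so dramatically comes down to link-budget arithmetic. A first-order sketch using the standard free-space path loss formula, FSPL(dB) = 32.44 + 20·log10(d_km) + 20·log10(f_MHz) (the radio power and antenna gains below are illustrative values, and real links lose further power to terrain and obstructions, so treat this as a best case):

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB for distance in km, frequency in MHz."""
    return 32.44 + 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz)

def rx_power_dbm(tx_dbm: float, tx_gain_dbi: float, rx_gain_dbi: float,
                 distance_km: float, freq_mhz: float = 2400.0) -> float:
    """Received power: transmit power plus antenna gains minus path loss."""
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(distance_km, freq_mhz)

# A 15 dBm radio with 24 dBi dishes at both ends, 10 km apart at 2.4 GHz:
print(round(rx_power_dbm(15, 24, 24, 10), 1))  # -57.0 dBm
```

With 48dB of combined antenna gain, the received signal at 10km lands comfortably above typical 802.11 receiver sensitivity; with ordinary omnidirectional antennas the same link would fall well below it.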
A significant trend among telecoms carriers in coming years is likely to be the replacement of class five switches with soft switches, software-based appliances that perform the same function.
The International Softswitch Consortium defines a soft switch as "a software-based entity that provides call control functionality." Telecom plans to make full use of them in its all-IP network, which is in the early stages of being rolled out in partnership with Alcatel SA.
"As we change, the PSTN (public phone network) will become a number of soft switches and corporate customers are also buying soft switches," said Telecom network investment general manager Rhoda Holmes.
"The enterprise and PSTN soft switches will talk to each other as the PSTN is replaced and the core soft switches will be loaded up with applications, with the business soft switches able to pull it down [from the carrier one] rather than put it on their own network. They'll talk directly over IP without all the lower layers of technology."
The PSTN will most likely be largely gone in 10 years' time, Holmes said.
The soft switch market may see some vendor consolidation in the immediate future. Lucent Technologies abandoned a soft switch project last year, though that probably had more to do with Lucent's specific business woes than with the state of the soft switch market.
Hardware carrier and enterprise switches are likely to remain around for some time, but they will take on increasingly complex functionality, moving well beyond layer two, the data link layer, to which many switches in use today are limited.
Switches will become ever more content-aware, taking on functions formerly carried out by servers. The future of switching can be seen in cutting-edge products being released by small start-ups such as Forum Systems, Sarvega and Vernier Networks.
Forum has developed an appliance which can encrypt XML data and Sarvega also has XML-equipped gear — its XPE 2000 switch can read incoming XML content, send it according to priorities set by users and check it for authenticity.
Vernier has a network edge switch which can inspect and filter packets at high speed.
Such functions will be vital if web services are widely deployed, as XML data will need to be verified effectively and quickly.
Adding functionality at higher layers is the cutting edge. Most big names, such as Enterasys Networks Inc., Nortel Networks Corp. and Cisco Systems Inc., are adding it at layers two to five, not at layer seven, the application layer, where the start-ups are working.
What it all adds up to is that traditional distinctions between switches and routers will become more and more blurred.
Another smaller U.S. operator, NetScaler Inc., has a device it calls the Request Switch 9000 iON, which uses layer seven inspection capability to send HTTP requests to the correct web server and also has security problem detection capability, among other functions.
Despite being called a switch by its maker, it also has SSL, load balancing, security and routing capabilities. -- Computerworld New Zealand Online