Abstracts |
Keynote speech: Modern academia: teaching, research, development, patents and standards, Uday B. Desai (Director, Indian Institute of Technology, Hyderabad, India)
Academia has traditionally focused on
teaching, research and development.
These three aspects form the core of
the academic paradigm in most
institutions.
Moreover, they are in themselves quite
demanding. In India, most academic
institutions are working towards
establishing themselves as leading
research institutions – in fact, they
are endeavouring to create an
innovation culture. With growing
awareness of how important it is to
create IPR, a new dimension is added to
academic pursuit, namely patents and
standards. Standards are vital to
today's technological development and
perhaps even more vital for taking
technology to market. Moreover,
standards are a source of revenue not
only for institutions but for the
nation.
Thus, today there is a need to rework
the academic structure. It is not
necessary that every faculty member
pursue all four; nevertheless, it is
imperative that enough faculty place
emphasis on standards. In fact, once the
realization sets in that taking research
to market is closely entwined with
standards, emphasis on standards will
follow automatically.
It is also important to recognize that
for research to get incorporated into
standards, there has to be active
collaboration with industry. This is
where Indian academia is weak.
In this talk, a brief perspective is
first given on how standards activity
can be incorporated into academia
without compromising existing academic
paradigms. Some research activity in
India, in the area of ICT, that could
have an impact on standards is then
mentioned, along with a brief
description of some of the ongoing
standards development efforts by
academicians in India. The talk
concludes with possible avenues for
academia to move forward and make major
contributions to international
standards. |
Keynote speech: Vehicle communication: a future telecommunication market, Tadao Saito (Professor Emeritus, University of Tokyo, Japan)
Because of the rapid development of
electronics, the market for information
and communication technology changes
rapidly. Cost reductions in advanced
electronics have made communication
equipment inexpensive, and nowadays
almost everyone in the world has a cell
phone. This means that the market for
telecommunication for human users is
near saturation. Although
telecommunication technology has changed
rapidly and the performance of
communication has improved, it is
difficult to charge a higher price for
higher performance, so market expansion
will be incremental.
As a promising new market, vehicles
could take advantage of
telecommunications to develop a variety
of applications for safety, comfort and
operation support. Connected vehicle
technology is a new competitive edge in
the car industry. In order to develop
these applications, the
telecommunication performance parameters
need to be redesigned. Vehicle
telecommunication can be subdivided into
two classes: "transport telematics using
telecommunication" and "intelligent
transport systems (ITS) using dedicated
short range communication". The border
between these classes is dynamically
shifting, expanding the territory of
transport telematics.
The presentation covers some examples of
vehicle communication and explains new
ranges of performance parameters. To
promote the future "network of things"
market, the performance parameter sets
need to be designed properly, whereas
current telecommunication development
follows a different path.
Finally, a new set of requirements for
Next Generation Networks can be derived
from the analysis of future
applications. |
Keynote speech: Future of communications?
The individual user experience, Detlev Otto (CTO, Nokia Siemens Networks, Germany)
The advent of smart devices combined
with HSPA network capabilities changed
many things in the mobile communications
food-chain. We could say the network
changed from service provisioning to
service enabling.
On the one hand, users are now masters
of their services and the network is
used as a transparent pipe. On the other
hand, network resource utilization has
lost its predictability. The third
"mega-trend" is connected objects, or
machine-to-machine (M2M) devices.
Forecasts predict as many as ten times
more connected objects/devices in 2020
compared to 2010.
Users have clearly seized this
opportunity and, with the help of smart
devices (phones and tablets), have
already started to individualize their
communication needs and behaviour – and
we are only at the beginning of that
era. The operators
will have to respond to this and make
sure their network resources are
utilized in relation to the ARPU they
can get. They will have to figure out
how to participate in this changed
"communications food-chain" and how to
manage a service enabling network.
Here we can see three "next big things"
for 2011 and beyond:
I. Capacity or resource management
II. Monetizing the service enabling
network
III. Network transformation from network
to service management
The first big thing focuses on the user
experience. The users are
individualizing their services and the
operators need to individualize the user
experience in the same way. The
communication networks need to learn to
differentiate between services and to
decide which service and user should get
which resource in which situation.
The second big thing is the need or wish
to participate in innovative new
services and revenue streams. These will
be born mainly out of two areas: a)
blended services combining the network
operator's user insight with services
from the cloud, and b) enterprise
services. Both require the network to do
two things smarter than today: identity
management and smart charging.
The third big thing is a network
transformation from network management
to service management. As part of this,
today’s BSS "food-chain" will need to
manage millions of connected machines
and objects, and planning and operations
processes will move away from
stove-pipes for radio, core and
transport in favour of services managed
end to end. IP
will be a key enabler assisting this
transformation. |
S2.1 Invited paper: Toward a polymorphic future internet: a networking science approach
Kavé Salamatian (Professor, Université de Savoie, France)
In this paper, I will develop two major
claims. First, the Future Internet
should be polymorphic and reconcile
different architectural networking
paradigms. The second claim is that the
Future Internet should be built on a
strong theoretical basis from a
networking science that is currently
under development. In this paper, I use
the concept of cooperation as an
interpretation lens. Specifically, I
will describe how virtualisation makes a
polymorphic future Internet possible and
enables the easy deployment of new
cooperation schemes. The next aspect
described in this paper relates to
security in the future Internet. In
particular, the paper advocates the
necessity of three major components: a
secure execution platform, an
authentication mechanism, and a
monitoring component. Finally, I will
show that it is possible to build a
scalable addressing and routing scheme,
but only on condition of following a
clean slate approach. |
S2.2 Introducing elasticity and adaptation into the optical domain toward more efficient and scalable optical transport networks*
Masahiko Jinno, Yoshiaki Sone, Osamu Ishida, Takuya Ohara, Akira Hirano, Masahito Tomizawa (NTT, Japan)
There is growing recognition that we are
rapidly approaching the physical
capacity limit of standard optical
fiber. It is important to make better
use of optical network resources to
accommodate the ever-increasing traffic
demand to support the future Internet
and services. We first introduce an
architecture, enabling technologies, and
the benefits of recently proposed
spectrum-efficient and scalable elastic
optical path networks. In these
networks, the required minimum spectral
resources are adaptively allocated to an
optical path based on traffic demand and
network conditions. We then present
possible adoption scenarios from current
rigid optical networks to elastic
optical path networks. We also discuss
some possible study items that are
relevant to the future activities of
ITU-T. These items include optical
transport network (OTN) architecture,
structure and mapping of the optical
transport unit, automatically switched
optical network (ASON) control plane
issues, and some physical aspects with
possible extension of the current
frequency grid. |
S2.3 Introducing multi-ID and multi-locator into network architecture*
Ved P. Kafle, Masugi Inoue (National Institute of Information and Communications Technology, Japan)
The present-day Internet has no separate
namespace for host IDs. It uses IP
addresses as host IDs, which are in fact
locators. This dual role is problematic
for mobility, multihoming, security, and
routing on the Internet. To solve these
problems, research has recently begun on
ID/locator split architectures. Some
standardization activities based on this
concept are also progressing in ITU-T
Study Group 13 and in the IETF. We
expect that introduction of the
ID/locator split concept into the new
generation network or future Internet
architecture can bring about additional
functions, such as heterogeneous network
protocol support, multicast, QoS,
resource or service discovery, and
flexible human-network interaction.
Toward realization of these functions,
this paper presents a study on an
approach to introducing multi-ID and
multi-locator support into the network
architecture. The paper also lists items
that have the potential to be
standardized in ITU-T. |
S3.1: Invited paper: Can computational thinking reduce marginalization in the future internet?
Peter Wentworth (Professor, Rhodes University, South Africa)
Maths is presently regarded as the key
driver that underpins Science, Education
and Technology (SET) skills. In spite of
significant studies, investment and
efforts, maths skills and widespread
enthusiasm for SET remain elusive. In
South Africa's disadvantaged
communities, poor quality maths teaching
and poor maths performance, both
legacies of past political engineering,
further fuel marginalization.
Computational thinking is a new
characterization of some specific
procedural thinking, abstraction,
problem solving and organizational
skills that are finding their way from
computer science programs into other
fields.
The paper describes our refocusing of
content in BingBee, a SET skill-building
kiosk project targeting disadvantaged
communities. As we shift to emphasize
computational thinking more explicitly,
we speculate that these skills could
complement, and perhaps eventually
displace, some elements of maths as the
dominant driver of SET.
The confluence of better tools, open
service interfaces, and the rapid spread
of handsets and devices into
marginalized communities is an
opportunity to build more widespread
computational thinking skills. This
could in turn facilitate a future
Internet which is more inclusive, and in
which users are able to create their own
services. |
S3.2: Invited paper: Challenges the Internet poses to the policymaker
Arun Mehta (President, Bidirectional Access Promotion Society, India)
This paper addresses policymakers at
national and international levels –
regulators, standards bodies,
politicians – arguing that there is no
“beyond” the Internet. With the Internet
so intimately intertwined with the lives
of people, being used to build the
backbone of large, important
communities, an attempt to replace it
with a new network would generate
immense friction and enormous cost. The
transition would take a long time,
because lots
of complex software would need to be
written, disrupting critical processes
of the economy, indeed of governance. A
plethora of regulators with very
different manners and degrees of control
would have to learn to work together at
an international level, otherwise we
might revert to the lawlessness of the
Internet. The lost opportunity of
Minitel, the botched attempt to look
beyond the Internet in the 1990s via
X.400, and the bankruptcy of large
telecommunication companies in the wake
of the dotcom boom are useful for
appreciating the historical context and
drawing lessons from. Instead of looking
beyond, the ITU should play a
constructive role vis-à-vis the
Internet. The suggestions presented are
the elimination of spam and making the
Internet accessible to all; these also
make commercial sense. |
S3.3: Participatory approach to the reduction of the digital gap in Amazon Region of Ecuador in the framework of the "innovation for development" program
Alessandro Galardini1, Daniele Trinchero1, Benedetta Fiorelli1; Salvatore Pappalardo2 (1Politecnico di Torino;
2University of Padova, Italy)
This work illustrates the methodological
approach followed in the Province of
Orellana, Eastern Ecuador, for the
realization of a telecommunication
network infrastructure between the
capital of the Province, the city of
Puerto Francisco de Orellana (also known
as El Coca), and some peripheral
communities located in the surrounding
tropical moist forest. The
project has been implemented in one of
the poorest countries of Latin America,
in a remote and disadvantaged area where
the lack of communication
infrastructures and the absence of
almost all public services generate
strong migration towards the capital. In
this context, a project was conceived in
2008 for the development of a
communication system that allows the
provisioning of basic intranet services
for distance learning, telemedicine and
Internet connectivity. The main scope of
the project was the development of an
approach focused on technology transfer
to the local population, in order to
start reducing the digital gap in the
area. The aim of the project has been
achieved thanks to the direct
involvement of local municipalities,
small entrepreneurs, communities and
local NGOs. The technology transfer to
local players and the choice of a
suitable platform, designed for
simplified, low-cost management,
guarantee the
sustainability and scalability of the
project. The declaration of interest in
the infrastructure by the Municipality
enables the economic sustainability of
the project. |
S4.1 Invited paper: A vision on the information and communication technologies using cloud computing environment
Hiroshi Yasuda (Professor, Tokyo Denki University, Japan)
The government of Japan announced a new
ICT policy in June 2010. One of the
points of the new policy is to start the
3D motion image content market in order
to create new key industries in the near
future, as 3D motion image content will
become the most powerful medium for
consumer-generated media (CGM). In order
to activate the 3D motion image content
industries, the development of an
effective and simple tool for making 3D
motion image content, even by
inexperienced people, is required. The
Digital Movie Director (DMD), developed
by the author, is evolving into such an
effective and simple tool. However, the
large computational power required for
making 3D motion image content has
prevented DMD from being widely
deployed. Cloud computing technology is
expected to solve this problem; thus, in
this paper, the future prospects of the
3D motion image content industries with
cloud computing technology are
explained. |
S4.2 Hybrid circuit/packet networks with dynamic capacity partitioning
Chaitanya S. K. Vadrevu1, Menglin Liu1, Biswanath Mukherjee1; Chin Guok2, Evangelos Chaniotakis2, Inder Monga2; Massimo Tornatore3 (1University
of California,
2Energy Sciences Network, USA;
3Politecnico di Milano, Italy)
In this paper, we consider hybrid
circuit/packet networks. A hybrid
circuit/packet network consists of a
circuit network co-existing with a
packet network; generally the packet
network is embedded on top of the
circuit network. However, in certain
cases, such as the DOE Energy Sciences
Network (ESnet) [4], the circuit network
and the packet network are deployed
side-by-side (e.g. they have common
end-node sites and equipment), but they
are logically separate and they may have
physically disjoint links. Currently,
there is no capacity sharing between the
packet and the circuit sections of the
networks. In this paper, we propose and
investigate the characteristics of
schemes that enable efficient capacity
partitioning between packet and circuit
networks while ensuring survivability
and robustness of the services. We
conduct simulative experiments on ESnet
topology with realistic traffic demands.
We observe that capacity partitioning
between packet and circuit networks
makes it possible to support services
with enhanced quality of service and
robustness, along with improved resource
utilization. |
S4.3 A new protocol layer for user space functionality
Pankaj Chand (Independent Researcher, India)
Evolution of the Internet user has
brought attention to the lack of
standards for ideal levels of user
interaction. The core Internet
architecture has not evolved much since
its inception, and its user-driven
limitations typically constrain one's
personal computing infrastructure so
that the goals of pervasive and
ubiquitous computing are only
incipiently achieved. We propose to
consider the user's image, or user
space, as a significant entity in the
Internet model by introducing a new
layer of protocols into the Internet
protocol stack to support future usage
in the Internet. We also present the
Identifier/Interlocutor/Locator split
architecture for flexible addressing.
Standards for such architectures would
provide generic user support across
heterogeneous networks. |
S4.4 Quality of service in the future internet
Jorge Carapinha1; Christoph Werle2; Konstantin Miller3; Roland Bless4; Horst Roessler5, Heidrun Grob-Lipski5; Andrei Bogdan Rus6, Virgil Dobrota6 (1Portugal Telecom Inovação,
Portugal;
2Universität Karlsruhe (TH),
3Berlin Institute of
Technology,
4Karlsruhe Institute of
Technology,
5Alcatel-Lucent, Germany;
6Technical University of Cluj-Napoca, Romania)
Whatever the Network of the Future turns
out to be, there is little doubt that
QoS will constitute a fundamental
requirement. However, QoS issues and the
respective solutions will not remain
unchanged. New challenges will be
raised; new ways of dealing with QoS
will be enabled by novel networking
concepts and techniques. Thus, a fresh
approach to the QoS problem will be
required. This paper addresses QoS in a
Future Internet scenario and is focused
on three emerging concepts: Network
Virtualization, enabling the coexistence
of multiple network architectures over a
common infrastructure; In-Network
Management, improving scalability of
management operations by distributing
management logic across all nodes; and
the Generic Path, based on the semantic
resource management concept, enabling
the design of new data transport
mechanisms and supporting different
types of communications in highly mobile
and dynamic network scenarios. |
S5.1 Cross-language
identification using wavelet
transform and artificial neural
network*
Shawki A. Al-Dubaee, Nesar Ahmad (Aligarh Muslim University, India)
With the advent of the Internet, search
engines were developed for the English
language because English was the lingua
franca. Currently, most popular search
engines, such as Google and Yahoo!, are
available in more than 50 languages.
However, South Asian languages,
especially Urdu, have received less
attention in these search engines. In
this paper, we propose a novel approach
for feature extraction and
classification of queries in
cross-language search engines. This
approach presents an automatic method
for English and Urdu language
identification. The classifier used is a
three-layered feed-forward artificial
neural network, and the feature vector
is formed by calculating the wavelet
coefficients. Three wavelet
decomposition functions (filters),
namely Haar, Bior 2.2 and Bior 3.1, have
been used to extract the feature vector
set and their performance results have
been compared. The Haar filter has given
superior results compared to the other
filters. |
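As an illustration of the pipeline described above (wavelet-coefficient features feeding a feed-forward neural network), a minimal sketch follows; it assumes queries are mapped to numeric sequences before decomposition, uses the PyWavelets and scikit-learn libraries, and invents small example queries and labels, so it should not be read as the authors' implementation.

# Minimal sketch: wavelet features + feed-forward classifier for
# English/Urdu query identification (illustrative assumptions only).
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(text, wavelet="haar", level=3, length=64):
    # Map characters to Unicode code points, padded/trimmed to a fixed length.
    signal = np.zeros(length)
    codes = [ord(c) for c in text[:length]]
    signal[:len(codes)] = codes
    # Feature vector = concatenated approximation and detail coefficients.
    return np.concatenate(pywt.wavedec(signal, wavelet, level=level))

# Hypothetical labelled queries: 0 = English, 1 = Urdu.
queries = ["weather today", "news update", "موسم کا حال", "تازہ خبریں"]
labels = [0, 0, 1, 1]
X = np.array([wavelet_features(q) for q in queries])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, labels)
print(clf.predict([wavelet_features("تازہ خبریں")]))  # expected: [1]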
S5.2 GeoHybrid: a hierarchical approach for accurate and scalable geographic localization*
Ibrahima Niang, Bamba Gueye, Bassirou Kasse (University Cheikh Anta DIOP of Dakar, Senegal)
Geographic location and Grid computing
are two areas that have taken off in
recent years, both receiving a lot of
attention from the research community.
Grid Resource Brokers, which try to find
the best match between job requirements
and the resources available on the Grid,
can benefit from knowing the geographic
location of clients, considerably
improving their decision-making
functions. A
measurement-based geolocation service
estimates host locations from delay
measurements taken from landmarks, which
are hosts with a known geographic
location, toward the host to be located.
Nevertheless, active measurement can
burden the network. Relying on
database-driven geolocation and active
measurements, we propose GeoHybrid.
GeoHybrid estimates the geographic
location of Internet hosts with low
overhead as well as better accuracy with
respect to geolocation databases.
Afterwards, we propose a geolocation
middleware for grid computing. By
defining the architecture and the
methods of this service, we show that a
promising symbiosis may be envisaged by
the use of the proposed middleware
service for grid computing. |
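The core idea of measurement-based geolocation (inferring location constraints from delays measured at landmarks with known positions) can be sketched as follows; the delay-to-distance constant, grid search and helper inputs are illustrative assumptions and do not describe the GeoHybrid algorithm itself.

# Sketch of constraint-based geolocation from landmark delay measurements,
# assuming one-way delay converts to a distance bound at roughly 2/3 the
# speed of light in fibre (about 200 km per millisecond). Illustrative only.
import math

KM_PER_MS = 200.0  # approx. propagation distance per ms of one-way delay in fibre

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometres.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def estimate_location(landmarks, rtts_ms, candidates):
    # landmarks: [(lat, lon)]; rtts_ms: RTT from each landmark to the target;
    # candidates: hypothetical grid of (lat, lon) points to test.
    best, best_score = None, -1
    for cand in candidates:
        score = 0
        for (lat, lon), rtt in zip(landmarks, rtts_ms):
            bound_km = (rtt / 2.0) * KM_PER_MS  # one-way delay distance bound
            if haversine_km(lat, lon, cand[0], cand[1]) <= bound_km:
                score += 1
        if score > best_score:
            best, best_score = cand, score
    return best  # candidate satisfying the most landmark constraints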
S5.3 Context-aware smart environments enabling new business models and services
Christian Mannweiler1; Jose Simoes2; Boris Moltchanov3 (1University
of Kaiserslautern;
2Fraunhofer FOKUS, Germany;
3Telecom Italia, Italy)
This work describes innovative smart
environments with embedded
context-awareness technologies, enabling
new business models and consequently the
creation of new services. The
context-awareness framework presented in
this paper is taken from the results of
an EU Framework Programme (FP) 7
Information and Communications
Technologies (ICT) project. Major
novelties include a business shift from
traditional and conventional
telecommunication or ICT services
towards highly personalized, customized
and user-targeted services, empowered by
a myriad of pervasive and ubiquitous
interconnected environments employing
various kinds of context information. In
this work, we show how these context
data can be technically made available
as a service and business enabler and be
used by any entity or application built
within these environments, using context
for adapting service logic or for
targeted service customization.
Moreover, it considers customers' needs
and privacy aspects, providing users
with an experience that is at once more
immersive and less intrusive. |
S5.4 Innovative tangible user interface as a means for interacting with telecommunications services
Klemen Peternel, Luka Zebec, Andrej Kos (University of Ljubljana, Slovenia)
While modern telecommunications are ever
more useful and even necessary in
everyday life, not all groups of people
are equally capable of using them. Due
to inevitable demographic changes, the
elderly are growing in numbers, yet they
are not very well served by user
interfaces for the various
telecommunications tools. The prime
target group for our proposed technology
is people with cognitive and motor
disabilities, whether due to age,
illness or traumatic events. They
require a user interface which enables
them to make or redirect calls, create
conferences, set forwarding and/or
access different voice XML services -
without the complexity of keyboards or
menus with tree structures. The
motivations behind it are simplicity,
accessibility, usability and efficiency,
all within the scope of potential user
groups and usage scenarios. The key
enablers are Next Generation Network
(NGN) open interfaces and Near Field
Communication (NFC) technology as part
of the Radio Frequency Identification
(RFID) family. |
S6.1 How many standards in a laptop? (and other empirical questions)
Brad Biddle, Andrew White, Sean Woods (Arizona State University, USA)
This empirical study identifies 251
technical interoperability standards
implemented in a modern laptop computer
and estimates that the total number of
standards relevant to such a device is
much higher. Of the identified
standards, the authors find that 44%
were developed by consortia, 36% by
formal standards development
organizations, and 20% by single
companies. The intellectual property
rights policies associated with 197 of
the standards are assessed: 75% were
developed under "RAND" terms, 22% under
"royalty free" terms, and 3% utilize a
patent pool. The authors make certain
observations based on their findings,
and identify promising areas for future
research. |
S6.2 A user-centric approach to QoS
regulation in future networks*
Eva Ibarrola1, Fidel Liberal1, Armando Ferro1; Jin Xiao2 (1University of the Basque Country, Spain;
2University of Waterloo, Canada)
The evolution of current networks to
Next Generation Networks (NGNs)
constitutes arguably the most
significant transformation in the
Telecommunication sector in recent
decades. Quality of Service (QoS) is one
of the key aspects in this evolution. In
the NGN environment, networks are
designed to be multiservice, supporting
a wide range of premium services. Each
of these services may have different QoS
requirements which should be established
based on the overall end user's
perception. In this emerging context,
novel QoS policies are required to adapt
the traditional QoS regulatory model to
the new scenario. This paper presents an
approach to identifying key factors that
contribute to the development of future
Internet quality of service regulation.
A case study on the application of our
user-centric QoS model to the Internet
QoS regulation in Spain is described.
The results of the study demonstrate the
need for adapting current regulatory
frameworks in order to ensure
competition, pluralism and diversity in
the new network environment. |
S6.3 Competition
and cooperation in the formation of
information technology interoperability
standards: a process model of web
services core standards*
Jai Ganesh (Infosys Technologies Ltd, India)
Standards formation is a key dimension
in the competitive strategy of ICT
firms, as a successful strategy would
result in the emergence of favorable IT
interoperability standards. This paper
examines the standardization efforts of
core Web services standards and the
results indicate that resource
dependencies and strategies adopted by
dominant firms to extend their platforms
influence the standards formation
process. Communities of practice and
standard-setting bodies are leveraged by
dominant firms in the formation and
adoption of standards. We propose a
process model of standard setting
consisting of five intertwined states:
resource pooling, linkages, signaling
and implementation,
institutionalization, and extension. |
S7.1 Performance comparison of intelligent jamming in the RF (physical) layer with WLAN Ethernet router and WLAN Ethernet bridge
Rakesh Jha, Upena D. Dalal (Sardar Vallabhbhai National Institute of Technology, India)
The very nature of Radio Frequency (RF)
technology makes Wireless LANs (WLANs)
open to a variety of unique attacks.
Most of these RF-related attacks begin
as exploits of Layer 1 (Physical - PHY)
and Layer 2 (Media Access Control - MAC)
of the 802.11 specification, and then
build into a wide array of more advanced
assaults, including Denial of Service
(DoS) attacks. In intelligent jamming,
the jammer jams the physical layer of
the WLAN by generating continuous
high-power noise in the vicinity of
wireless receiver nodes. In this paper,
we study the threats posed by
intelligent jamming, comparing a WLAN
Ethernet router with a WLAN Ethernet
bridge, and the security goals to be
achieved. We present and examine
simulation results for throughput under
different scenarios, using the
well-known network simulator OPNET 10.0,
and OPNET Modeler 14.5 for WiMAX
performance. IEEE 802.11b has two
different DCF modes: basic CSMA/CA and
RTS/CTS. Intelligent jamming exploits
knowledge of the protocol; the jamming
described in our paper is based on fake
AP jamming. When we applied the same
concept to a WiMAX system under the
influence of jamming, we observed the
same effect on router performance. |
S7.2 Self-organized spectrum chunk selection algorithm for local area LTE-Advanced
Sanjay Kumar1; Yuanye Wang2, Nicola Marchetti2 (1Birla Institute of Technology, India;
2Aalborg University, Denmark)
This paper presents a self-organized
spectrum chunk selection algorithm that
minimizes the mutual intercell
interference among Home eNodeBs (HeNBs),
aiming to improve system throughput
performance compared to the existing
frequency reuse-one scheme. The proposed
algorithm is useful in Local Area (LA)
deployments of Long Term
Evolution-Advanced (LTE-A) systems,
where the HeNBs are expected to be
deployed randomly, without coordination,
in a distributed manner. The results
show that the proposed algorithm
effectively improves system throughput
performance with very limited signaling
exchange among the HeNBs. |
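The abstract does not detail the selection rule; a generic illustration of distributed least-interference chunk selection of this kind is sketched below, with invented HeNB positions, a crude path-loss model and a fixed number of chunks as assumptions, and it is not necessarily the authors' algorithm.

# Generic sketch: each HeNB repeatedly picks the spectrum chunk with the
# lowest measured interference from its neighbours. Illustrative only.
import random

NUM_CHUNKS = 4

def select_chunk(interference_per_chunk):
    # Index of the chunk with the lowest measured interference.
    return min(range(len(interference_per_chunk)), key=lambda c: interference_per_chunk[c])

def measure_interference(my_pos, others, num_chunks=NUM_CHUNKS):
    # Hypothetical measurement: sum crude inverse-square contributions per chunk.
    interference = [0.0] * num_chunks
    for pos, chunk in others:
        d2 = (my_pos[0] - pos[0]) ** 2 + (my_pos[1] - pos[1]) ** 2
        interference[chunk] += 1.0 / max(d2, 1.0)
    return interference

# Randomly placed HeNBs iteratively re-select chunks until choices settle.
henbs = [((random.uniform(0, 100), random.uniform(0, 100)), random.randrange(NUM_CHUNKS))
         for _ in range(10)]
for _ in range(5):  # a few self-organization rounds
    henbs = [(pos, select_chunk(measure_interference(pos, [h for h in henbs if h[0] != pos])))
             for pos, _ in henbs]
print([chunk for _, chunk in henbs])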
S7.3 On the design of ultra wide band antenna based on fractal geometry
Pranoti Bansode1; Raj Kumar2 (1Defence Institute of Advanced Technology,
2University of Pune, India)
This paper presents an ultra wide band
circular fractal antenna. The antenna
has been fed with a coplanar waveguide
(CPW) feed. The fractal antenna has been
designed and fabricated on an FR4
substrate with εr = 4.3 and thickness
h = 1.53 mm, with an initial solid
circular disc diameter of 15 mm. The
experimental results of the circular
fractal antenna exhibit an ultra wide
band (UWB) characteristic from 3.295 GHz
to 13.365 GHz, corresponding to a
120.88 % impedance bandwidth. The first
resonant frequency of the fractal
antenna shifted to 3.75 GHz compared
with the first resonant frequency of
4.31 GHz of a conventional simple
circular disc monopole antenna,
indicating a reduction in antenna size.
The measured radiation pattern of this
fractal antenna is nearly
omni-directional in the azimuth plane
throughout the band. This type of
antenna can be useful for UWB systems
and sensing applications. |
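As a quick check, the quoted impedance bandwidth is consistent with the standard fractional bandwidth formula applied to the measured band edges:

\mathrm{BW} = \frac{2\,(f_H - f_L)}{f_H + f_L} \times 100\% = \frac{2\,(13.365 - 3.295)}{13.365 + 3.295} \times 100\% \approx 120.9\%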
S7.4 Design of inscribed square circular fractal antenna with adjustable notch-band characteristics
Raj Kumar1; Kailas Sawant2, Jatin Pai2 (1University of Pune,
2Defence Institute of Advanced Technology, India)
This paper presents the design of an
inscribed square circular fractal
antenna with a notch having adjustable
frequency characteristics. The position
and width of the notch band can be
adjusted across the entire operating
band. A prototype of the antenna has
been designed on an FR4 substrate with
εr = 4.3 and thickness h = 1.53 mm, with
a U-shaped slot in the coplanar
waveguide feed of length L = 11 mm and
slot width W = 0.4 mm. The experimental
results of this antenna exhibit
ultra-wide band characteristics from
3.1 GHz to 15.0 GHz. The notch in the
operating band helps to reduce
interference with the Worldwide
Interoperability for Microwave Access
(WiMAX) frequency bands. The simulated
and experimental return loss are found
to be in good agreement. The
experimental radiation pattern of this
antenna in the azimuth plane is nearly
omni-directional. The proposed inscribed
square circular fractal antenna with
notch can thus be used for ultra wide
band (UWB) systems, microwave imaging
and precision positioning systems. |
S7.5 Resonant frequencies of a circularly polarized nearly circular annular ring microstrip antenna with superstrate loading and airgaps
Jayashree Shinde1; Pratap Shinde2, BrajKishor Mishra2; Raj Kumar3; Mahadeo Uplane4 (1Sinhgad Academy of Engineering,
2NMIMS University, 3DAIT University,
4Shivaji University, India)
This paper presents an analysis of the
resonant frequencies and their various
harmonics for a nearly circular Annular
Ring Microstrip Antenna (ARMSA) with and
without air gaps and superstrate
loadings. The ARMSA is studied for
various radii of the inner and outer
radiating circular edges of the disc.
Three such nearly circular ARMSAs are
analyzed with an aspect ratio of 0.98.
By diagonal feeding at the center of the
ARMSA, circular polarization is
observed, with generation of the
fundamental resonant frequency and
higher-order modes. Multilayer
dielectric ARMSAs with and without air
gaps are analyzed using an effective
quasi-static capacitance approach and
compared with experimental results from
a vector network analyzer, giving less
than 1% deviation in the resonant
frequency. The full-wave simulated and
experimental readings are also in good
agreement for all three nearly circular
ARMSAs, with and without air gaps, and
with superstrate loadings of various
heights and dielectric constants as
cover. This closed-form model of the
nearly circular ARMSA is suitable for
CAD of covered antenna devices and is
directly applicable to the integration
of microstrip antennas beneath
protective dielectric superstrates in
portable wireless equipment. |
S8.1 A scheme for disaster recovery in wireless networks with dynamic ad-hoc routing
Guowei Chen, Aixian Hu, Takuro Sato (Waseda University, Japan)
This paper proposes a hybrid network
scheme that combines ad-hoc networking
with cellular networks. The scheme aims
to help networks recover service as much
as possible after a disaster strikes, by
maintaining the connection between Base
Stations (BSs) and nodes via
multi-hopping: if a node cannot connect
to a BS directly, it switches its
working mode from cellular to ad-hoc. A
location-based routing protocol has been
proposed for building a route from the
node to the BS. Simulation results show
that, even if only a small fraction of
the nodes can directly connect to a BS,
most of the nodes can find a route to a
BS via multi-hopping. It is also found
that the scheme outperforms a previously
proposed beaconing-based solution in
terms of resistance to mobility. |
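The abstract does not specify the routing details; a minimal sketch of generic location-based greedy forwarding toward the nearest base station is given below as an illustration, assuming every node knows its own position, its neighbours' positions and the BS positions, with an arbitrary direct-connection radius. It is not the authors' protocol.

# Sketch of greedy geographic forwarding toward the nearest base station.
# Positions, the neighbour table and the direct-connection radius are assumptions.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_to_bs(node, neighbours, base_stations, direct_range=1.0, max_hops=20):
    # node: (x, y); neighbours: dict mapping a node position to reachable
    # neighbour positions; base_stations: list of BS positions.
    target = min(base_stations, key=lambda bs: dist(node, bs))
    path, current = [node], node
    for _ in range(max_hops):
        if dist(current, target) <= direct_range:  # BS reachable directly
            return path + [target]
        # Forward to the neighbour that makes the most progress toward the BS.
        progress = [n for n in neighbours.get(current, [])
                    if dist(n, target) < dist(current, target)]
        if not progress:
            return None  # greedy forwarding is stuck; no route found
        current = min(progress, key=lambda n: dist(n, target))
        path.append(current)
    return None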
S8.2 A new study on network performance under link failure in OPS/OBS high-capacity optical networks
Felipe Rudge Barbosa, Indayara Martins, Edson Moschim (State University of Campinas - Unicamp, Brazil)
In this work we analyze the performance
and sensitivity to link failure of
metropolitan networks based on the
technology of optical packet/burst
switching (OPS/OBS). We use ring and
mesh topologies to evaluate, through
analytical modeling and computer
simulations, the impact of link failure
on each topology. We adopt the average
number of hops and the packet loss
fraction as parameters for evaluating
network performance. It is observed that
mesh topologies with a triple connection
node configuration (3x3) are more
robust; consequently, in case of link
failure, the impact of lost data is
minimal compared with the other
topologies and configurations
considered. |
S8.3 Business scheme for shifting from existing networks to trusted green networks
Yoshitoshi Murata (Iwate Prefectural University, Japan)
Future networks have yet to be defined.
They are not simply the next generation
of the Internet, and they need to
satisfy requirements for the
sustainability of mankind; in this paper
they are called Trusted Green Networks
(TGNs). Although TGNs offer marvellous
concepts and excellent functions, they
will not necessarily be widely deployed.
There have been several initiatives to
develop future networks. Their purpose
is to develop innovative technologies,
but they do not include deployment
schemes. We selected "sustainability",
"trust and security", and "solving the
digital divide by location" as concepts
underlying TGNs and clarified their
requirements. A business scheme is also
proposed that boosts the shift from
existing networks to TGNs, and the
network layer model of TGNs is
introduced. |
S8.4 Innovative ad-hoc wireless sensor networks to significantly reduce leakages in underground water infrastructures
Daniele Trinchero1, Riccardo Stefanelli1, Luca Cisoni1; Abdullah Kadri2, Adnan Abu-Dayya2, Mazen Omar Hasna2, Tamer Khattab2 (1Politecnico di Torino, Italy;
2Qatar University, Qatar)
This paper presents an ICT solution to
overcome the problem of water dispersion
in water distribution networks. Leakage
prevention and break identification in
water distribution networks are
fundamental for an adequate use of
natural resources. Nowadays, all over
the world, water wastage along the
distribution path reaches untenable
percentages (up to 80 % in some
regions). Since the pipes are buried in
the ground, typically only major breaks
are considered for restoration:
excavations are very expensive, and
consequently the costs of identifying
the position of the leak, or even just
the position of the pipe itself, are too
high. To address this problem, and to
simplify the leakage identification
process, the authors have designed a
wireless network system making use of
mobile wireless sensors able to detect
breaks, reveal unknown pipe tracks and
monitor the pressure spectrum of the
fluid flowing in the pipe. The sensors
transmit the acquired data from
underground to the surface by means of a
wireless connection. On the surface
there are stations that receive the
signal, process it, and communicate with
a central unit, where intelligent signal
processing techniques are used to detect
leakage sources. Compared to other
leakage detection solutions already
available on the market (such as ground
penetrating radar (GPR), pure acoustic
techniques and tracer gases), the
proposed technique appears very
efficient and much less expensive. |