Wednesday, March 9, 2022

Microprocessor and Microcontroller 2021_Supply, 2020 and 2017_Question papers

 https://drive.google.com/file/d/17VDH1UuFp98tn_ekaE-7hJC0_bKEqUkh/view?usp=sharing

https://drive.google.com/file/d/1tovsGt8I2DWVJ1jab68goXkGFrL1ZzfU/view?usp=sharing

https://docs.google.com/document/d/1RdEpww-qxFxXhb52n7xaZGkT1-rtGY78/edit?usp=sharing&ouid=118195087646814206783&rtpof=true&sd=true

Wednesday, February 5, 2020

IoT

 

COURSE FILE CONTENTS

1. Department Vision and Mission
2. Course Description
3. Course Overview
4. Course Pre-requisites
5. Marks Distribution
6. POs and PSOs
7. Course Outcomes (COs)
8. CO mapping with POs and PSOs
9. Syllabus, Textbooks and Reference Books
10. Gaps in Syllabus
11. Course Plan/Lesson Plan
12. Lecture Notes
        Unit-I: Introduction to Internet of Things
        Unit-II: Internet Principles and Communication Technology
        Unit-III: Prototyping and Programming for IoT
        Unit-IV: Cloud Computing and Data Analytics
        Unit-V: IoT Product Manufacturing – From Prototype to Reality
13. Unit-wise Question Bank
        a. Short answer questions
        b. Long answer questions
14. Previous University Question Papers
15. Unit-wise Assignment Questions
16. Internal Question Papers with Key
17. Content Beyond Syllabus
18. Methodology used to identify weak and bright students
        · Support extended to weak students
        · Efforts to engage bright students

 

 

 

 

CERTIFICATE

 

I, the undersigned, have completed the course allotted to me as shown below:

Sl.No. | Semester | Name of the Subject | Course ID | Total Units
1      | VIII     | Internet of Things  | OE 773 EC | 5

Date:                                                                                       Prepared by

 

Academic Year: 2020-21                                      1. Mr. V. Karunakar Reddy

 

                                                                                               

 

 

Verifying authority:

1. Head of the Department: …………………………………..
2.
3.

PRINCIPAL

MATRUSRI ENGINEERING COLLEGE

                                     Saidabad, Hyderabad-500 059.

(Approved by AICTE & Affiliated to Osmania University)

      

          ELECTRONICS AND COMMUNICATION ENGINEERING

 

DEPARTMENT VISION

 

To become a reputed centre of learning in Electronics and Communication and transform the students into accomplished professionals

 

DEPARTMENT MISSION

M1: To provide the learning ambience to nurture the young minds with theoretical and practical knowledge to produce employable and competent engineers.

 

M2: To provide a strong foundation in fundamentals of electronics and communication engineering to make students explore advances in research for higher learning.


M3: To inculcate awareness for societal needs, continuous learning and professional practices.

    M4: To imbibe team spirit and leadership qualities among students.

 

COURSE DESCRIPTOR

 

Course Title: Internet of Things
Course Code: OE 773 EC
Program: B.E.
Semester: VII
Course Type: Open Elective-II
Regulation: R-19

Course Structure (Theory): Lectures: 35 | Tutorials: 0 | Credits: 3
Course Structure (Practical): Laboratory: – | Credits: –

Course Faculty: Mr. V. Karunakar Reddy

I.             COURSE OVERVIEW:

 

 

After completing this course, the student will be able to identify the various applications of IoT and other enabling technologies, and comprehend the protocols and communication technologies used in IoT. The course prepares students to design simple IoT systems with the requisite hardware and embedded C programming, and to understand the relevance of cloud computing and data analytics to IoT. It also covers the business model of IoT, from developing a prototype to launching a product.

 

II.          COURSE PRE-REQUISITES:

 

Level | Course Code | Semester | Prerequisites                            | Credits
UG    | PE 672 EC   | VI       | Data Communication and Computer Networks | 3
UG    | PC 701 EC   | VII      | Embedded Systems                         | 3

 

 

III.       MARKS DISTRIBUTION:

 

 

Subject            | SEE Examination | CIA Examination | Total Marks
Internet of Things | 70              | 30              | 100

 

IV.        PROGRAM OUTCOMES (POs):

 

The students will be able to:

PO1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and an engineering specialization to the solution of complex engineering problems.

PO2. Problem analysis: Identify, formulate, review research literature, and analyze complex engineering problems reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.

PO3. Design/development of solutions: Design solutions for complex engineering problems and design system components or processes that meet the specified needs with appropriate consideration for the public health and safety, and the cultural, societal, and environmental considerations.

PO4. Conduct investigations of complex problems: Use research-based knowledge and research methods including design of experiments, analysis and interpretation of data, and synthesis of the information to provide valid conclusions.

PO5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering and IT tools including prediction and modeling to complex engineering activities with an understanding of the limitations.

PO6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to the professional engineering practice.

PO7. Environment and sustainability: Understand the impact of the professional engineering solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for, sustainable development.

PO8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the engineering practice.

PO9. Individual and team work: Function effectively as an individual, and as a member or leader in diverse teams, and in multidisciplinary settings.

PO10. Communication: Communicate effectively on complex engineering activities with the engineering community and with society at large, such as being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions.

PO11. Project management and finance: Demonstrate knowledge and understanding of the engineering and management principles and apply these to one’s own work, as a member and leader in a team, to manage projects and in multidisciplinary environments.

PO12. Life-long learning: Recognize the need for, and have the preparation and ability to engage in, independent and life-long learning in the broadest context of technological change.

 

 

V.           PROGRAM SPECIFIC OUTCOMES (PSOs):

 

The students will be able to:

PSO1. Professional Competence: Apply the knowledge of Electronics and Communication Engineering principles in different domains like VLSI, Signal Processing, Communication and Embedded Systems.

PSO2. Technical Skills: Able to design and implement products using state-of-the-art hardware and software tools, and hence provide simple solutions to complex problems.

VI.        COURSE OUTCOMES (COs):

 

The course should enable the students to:

CO1. Discuss fundamentals of IoT, its applications and the requisite infrastructure.

CO2. Describe Internet principles and communication technologies relevant to IoT.

CO3. Discuss hardware and software aspects of designing an IoT system.

CO4. Describe concepts of cloud computing and data analytics.

CO5. Discuss business models and manufacturing strategies of IoT products.

 

VII.           MAPPING COURSE OUTCOMES (COs) with POs and PSOs:

(3 = High; 2 = Medium; 1 = Low)

COs | PO1 | PO2 | PO3 | PO4 | PO5 | PO6 | PO7 | PO8 | PO9 | PO10 | PO11 | PO12 | PSO1 | PSO2
CO1 |  2  |  2  |  2  |  2  |  -  |  -  |  -  |  -  |  -  |  -   |  -   |  -   |  2   |  1
CO2 |  2  |  -  |  2  |  -  |  -  |  1  |  -  |  -  |  -  |  -   |  -   |  -   |  2   |  2
CO3 |  2  |  -  |  2  |  2  |  2  |  -  |  -  |  -  |  -  |  -   |  -   |  1   |  2   |  2
CO4 |  2  |  2  |  2  |  2  |  2  |  1  |  -  |  -  |  -  |  -   |  -   |  1   |  2   |  2
CO5 |  2  |  2  |  2  |  2  |  2  |  1  |  -  |  -  |  -  |  -   |  -   |  1   |  1   |  1

VIII.     SYLLABUS:

 

UNIT I – Introduction to Internet of Things (No. of Hrs: 06)

IoT vision, strategic research and innovation directions, IoT Applications, Related future technologies, Infrastructure, Networks and Communication, Processes, Data Management, Security, Device-level Energy issues.

UNIT II – Internet Principles and Communication Technology (No. of Hrs: 06)

Internet Communications: An Overview – IP, TCP, IP protocol Suite, UDP. IP addresses – DNS, Static and Dynamic IP addresses, MAC Addresses, TCP and UDP Ports, Application Layer Protocols – HTTP, HTTPS. Cost vs. Ease of Production, Prototypes and Production, Open Source vs. Closed Source.

 

UNIT III – Prototyping and Programming for IoT (No. of Hrs: 08)

Prototyping Embedded Devices – Sensors, Actuators, Microcontrollers, SoC, Choosing a platform. Prototyping Hardware platforms – Arduino, Raspberry Pi. Prototyping the physical design – Laser Cutting, 3D printing, CNC Milling.
Techniques for writing embedded C code: Integer data types in C, manipulating bits – AND, OR, XOR, NOT, reading from and writing to I/O ports. Simple embedded C programs for LED blinking and for control of a motor using a switch and a temperature sensor on an Arduino board.

UNIT IV – Cloud Computing and Data Analytics (No. of Hrs: 07)

Introduction to cloud storage models – SaaS, PaaS and IaaS. Communication APIs, Amazon Web Services for IoT, SkyNet IoT Messaging Platform.
Introduction to Data Analytics for IoT – Apache Hadoop, MapReduce job execution workflow.

UNIT V – IoT Product Manufacturing – From Prototype to Reality (No. of Hrs: 08)

Business model for IoT product manufacturing, business model canvas, funding an IoT start-up. Mass manufacturing – designing kits, designing PCBs, 3D printing, certification, scaling up software. Ethical issues in IoT – privacy, control, environment; solutions to ethical issues.

 

 

 

TEXT BOOKS:

1. Internet of Things – Converging Technologies for Smart Environments and Integrated Ecosystems, River Publishers.
2. Designing the Internet of Things, Adrian McEwen and Hakim Cassimally, Wiley India Publishers.
3. Fundamentals of Embedded Software: Where C and Assembly Meet, Daniel W. Lewis, Pearson.
4. Internet of Things – A Hands-On Approach, Arshdeep Bahga, Universities Press.

REFERENCES:

 

1. Internet of Things (A Hands-On Approach), Vijay Madisetti and Arshdeep Bahga, VPT Publisher, 1st Edition, 2014.

 

 

IX.        GAPS IN THE SYLLABUS - TO MEET INDUSTRY / PROFESSION REQUIREMENTS:

 

 

S.No. | Description              | Proposed Actions | Relevance with POs | Relevance with PSOs
1     | Communication API        | Interfacing      | PO10               | PSO1
2     | Python for Deep Learning | Software         | PO11               | PSO1
3     | ARDUINO                  | Project          | PO11               | PSO2

 

X.           COURSE PLAN/ LECTURE PLAN:

 

         

Lecture No. | Topics to be covered | PPT/BB/OHP/e-material | No. of Hrs | Relevant COs | Text Book/Reference Book
1  | Introduction to Internet of Things – IoT vision | PPT | 1 | 1 | 1
2  | Strategic research and innovation directions | PPT | 1 | 1 | 1
3  | IoT Applications | PPT | 1 | 1 | 1
4  | Related future technologies, Infrastructure | PPT | 1 | 1 | 1
5  | Networks and Communication, Processes, Data Management | PPT | 1 | 1 | 1
6  | Security, Device-level Energy issues | PPT | 1 | 1 | 1
7  | Internet Principles and communication technology | PPT | 1 | 2 | 2
8  | Internet Communications: An Overview – IP, TCP, IP protocol Suite | PPT | 1 | 2 | 2
9  | UDP, IP addresses – DNS, Static and Dynamic IP addresses | PPT | 1 | 2 | 2
10 | MAC Addresses, TCP and UDP Ports | PPT | 1 | 2 | 2
11 | Application Layer Protocols – HTTP, HTTPS, Cost vs Ease of Production | PPT | 1 | 2 | 2
12 | Prototypes and Production, Open Source vs Closed Source | PPT | 1 | 2 | 2
13 | Prototyping and programming for IoT – Embedded Devices: Sensors, Actuators, Microcontrollers, SoC | PPT | 1 | 3 | 2
14 | Choosing a platform, Prototyping Hardware platforms – Arduino, Raspberry Pi | PPT | 1 | 3 | 2
15 | Prototyping the physical design – Laser Cutting, 3D printing, CNC Milling | PPT | 2 | 3 | 2
16 | Techniques for writing embedded C code: Integer Data types in C | PPT | 1 | 3 | 3
17 | Manipulating bits – AND, OR, XOR, NOT | BB | 1 | 3 | 3
18 | Reading and writing from I/O ports | BB | 1 | 3 | 3
19 | Simple embedded C programs for LED blinking | BB | 1 | 3 | 3
20 | Control of motor using switch and temperature sensor for Arduino board | PPT | 1 | 3 | 3
21 | Introduction to Cloud storage models – SaaS, PaaS and IaaS | PPT | 1 | 4 | 4
22 | Communication APIs | PPT | 1 | 4 | 4
23 | Amazon Web Services for IoT | PPT | 1 | 4 | 4
24 | SkyNet IoT Messaging Platform | PPT | 1 | 4 | 4
25 | Introduction to Data Analytics for IoT | PPT | 1 | 4 | 4
26 | Apache Hadoop | PPT | 1 | 4 | 4
27 | MapReduce job execution workflow | PPT | 1 | 4 | 4
28 | Business model for IoT product manufacturing | PPT | 1 | 5 | 2
29 | Business model canvas | PPT | 1 | 5 | 2
30 | Funding an IoT start-up | PPT | 1 | 5 | 2
31 | Mass manufacturing – Designing kits, Designing PCB, 3D printing | PPT | 1 | 5 | 2
32 | Certification, Scaling up Software | PPT | 1 | 5 | 2
33 | Ethical issues in IoT – Privacy, control, Environment | PPT | 1 | 5 | 2
34 | Solutions to Ethical issues | PPT | 1 | 5 | 2
35 | Solutions to Ethical issues, Revision | PPT | 1 | 5 | 2
1  | Tutorial | – | – | – | –
2  | Tutorial | – | – | – | –
3  | Tutorial | – | – | – | –
4  | Tutorial | – | – | – | –
5  | Tutorial | – | – | – | –

LECTURE NOTES

UNIT-I:

Internet of Things (IoT) is a concept and a paradigm that considers pervasive presence in the environment of a variety of things/objects that, through wireless and wired connections and unique addressing schemes, are able to interact with each other and cooperate with other things/objects to create new applications/services and reach common goals. In this context the research and development challenges to create a smart world are enormous: a world where the real, the digital and the virtual are converging to create smart environments that make energy, transport, cities and many other areas more intelligent.

The goal of the Internet of Things is to enable things to be connected anytime, anyplace, with anything and anyone ideally using any path/network and any service.

Internet of Things is a new revolution of the Internet. Objects make themselves recognizable and they obtain intelligence by making or enabling context related decisions thanks to the fact that they can communicate information about themselves. They can access information that has been aggregated by other things, or they can be components of complex services. This transformation is concomitant with the emergence of cloud computing capabilities and the transition of the Internet towards IPv6 with an almost unlimited addressing capacity.

New types of applications can involve the electric vehicle and the smart house, in which appliances and services that provide notifications, security, energy-saving, automation, telecommunication, computers and entertainment are integrated into a single ecosystem with a shared user interface. Obviously, not everything will be in place straight away. By developing, demonstrating, testing and deploying the technology right now, Europe will be much nearer to implementing smart environments by 2020. In the future, computation, storage and communication services will be highly pervasive and distributed: people, smart objects, machines, platforms and the surrounding space (e.g., with wireless/wired sensors, M2M devices, RFID tags, etc.) will create a highly decentralized common pool of resources (up to the very edge of the “network”) interconnected by a dynamic network of networks. The “communication language” will be based on interoperable protocols, operating in heterogeneous environments and platforms. IoT in this context is a generic term: all objects can play an active role thanks to their connection to the Internet, creating smart environments where the role of the Internet has changed. This powerful communication tool provides access to information, media and services through wired and wireless broadband connections. The Internet of Things makes use of the synergies generated by the convergence of the Consumer, Business and Industrial Internet. The convergence creates the open, global network connecting people, data, and things. This convergence leverages the cloud to connect intelligent things that sense and transmit a broad array of data, helping to create services that would not be obvious without this level of connectivity and analytical intelligence. The use of platforms is being driven by transformative technologies such as cloud, things, and mobile. The cloud enables a global infrastructure to generate new services, allowing anyone to create content and applications for global users. Networks of things connect things globally and maintain their identity online. Mobile allows connection to this global infrastructure anytime, anywhere. The result is a globally accessible network of things, users, and consumers, who are available to create businesses, contribute content, and generate and purchase new services.

Platforms also rely on the power of network effects: as they allow in more things, they become more valuable to the other things and to the users that make use of the services generated. The success of a platform strategy for IoT can be determined by connection, attractiveness and knowledge/information/data flow.

Enabling technologies for the Internet of Things such as sensor networks, RFID, M2M, mobile Internet, semantic data integration, semantic search, IPv6, etc. can be grouped into three categories:

(i) Technologies that enable “things” to acquire contextual information.

(ii) Technologies that enable “things” to process contextual information.

(iii) Technologies to improve security and privacy.

The first two categories can be jointly understood as the functional building blocks required to build “intelligence” into “things”, which are indeed the features that differentiate the IoT from the usual Internet. The third category is not a functional but rather a de facto requirement, without which the penetration of the IoT would be severely reduced. Internet of Things developments imply that environments, cities, buildings, vehicles, clothing, portable devices and other objects have more and more information associated with them and/or the ability to sense, communicate, network and produce new information. In addition we can also include non-sensing things (i.e. things that may have functionality, but do not provide information or data). All the computers connected to the Internet can talk to each other, and with the connection of mobile phones the Internet has now become mobile.

With the Internet of Things, communication is extended via the Internet to all the things that surround us. The Internet of Things is much more than M2M communication, wireless sensor networks, 2G/3G/4G, RFID, etc. These are considered the enabling technologies that make “Internet of Things” applications possible.

In the convergence of wireless and wired technologies, network neutrality is an essential element: no bit of information should be prioritized over another, so that the principle of connecting anything from/to anybody, located anywhere, at any time, using the most appropriate physical path available between the sender and the recipient, is applied in practice. To respect these principles, Internet service providers and governments need to treat all data on the Internet equally, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication.

 

Internet of Things Common Definition

Ten “critical” trends and technologies impacting IT for the next five years were laid out by Gartner in 2012, among them the Internet of Things, which will benefit from cheap, small devices giving everything a radio and location capability. Self-assembling mesh networks and location-aware services will be provided over a common multi-service IP network supporting a wide range of applications and services.

The use of IP to communicate with and control small devices and sensors opens the way for the convergence of large, IT-oriented networks with real-time and specialized networked applications.

                                                Fig: IP convergence

 

Currently, the IoT is made up of a loose collection of disparate, purpose-built networks, which are mostly not interconnected. Today’s vehicles, for example, have multiple networks to control engine function, safety features, communications systems, and so on.

Commercial and residential buildings also have various control systems for heating, ventilation and air conditioning (HVAC); telephone service; security; and lighting.

As the IoT evolves, these networks, and many others, will be connected with added security, analytics, and management capabilities and some of them will converge. This will allow the IoT to become even more powerful in what it can help people achieve.

 

                        Fig: IoT viewed as a network of networks

 

A presentation of IoT as a network of networks is given in the figure above. The Internet of Things is not a single technology; it is a concept in which most new things are connected and enabled: street lights are networked, and things like embedded sensors, image recognition functionality, augmented reality and near-field communication are integrated into situational decision support, asset management and new services. These bring many business opportunities and add to the complexity of IT.

Distribution, transportation, logistics, reverse logistics, field service, etc. are areas where the coupling of information and “things” may create new business processes or may make the existing ones highly efficient and more profitable.

The Internet of Things provides solutions based on the integration of information technology, which refers to hardware and software used to store, retrieve, and process data, and communications technology, which includes electronic systems used for communication between individuals or groups. The rapid convergence of information and communications technology is taking place at three layers of technology innovation: the cloud, the data and communication pipes/networks, and the device.

The synergy of the access and potential data exchange opens huge new possibilities for IoT applications. Already over 50% of Internet connections are between or with things. In 2011 there were over 15 billion things on the Web, with 50 billion+ intermittent connections.

By 2020, over 30 billion connected things, with over 200 billion intermittent connections, are forecast. Key technologies here include embedded sensors, image recognition and NFC. By 2015, in more than 70% of enterprises, a single executive will oversee all Internet-connected things. This becomes the Internet of Everything.

As a result of this convergence, IoT applications require that classical industries adapt, and the technology will create opportunities for new industries to emerge and to deliver enriched and new user experiences and services. In addition, to be able to handle the sheer number of things and objects that will be connected in the IoT, cognitive technologies and contextual intelligence are crucial. This also applies to the development of context-aware applications that need to reach to the edges of the network through smart devices that are incorporated into our everyday life.

The Internet is not only a network of computers; it has evolved into a network of devices of all types and sizes: vehicles, smart phones, home appliances, toys, cameras, medical instruments and industrial systems, all connected, all communicating and sharing information all the time, as presented in the figure below.

 

                                                            Fig:  Internet of everything

 

The Internet of Things has, until recently, meant different things at different levels of abstraction through the value chain, from the lower-level semiconductor through to the service providers.

The Internet of Things is a “global concept” and requires a common definition. Considering the wide background and required technologies, from sensing devices, communication subsystems, data aggregation and pre-processing to object instantiation and finally service provision, generating an unambiguous definition of the “Internet of Things” is non-trivial.

The IERC is actively involved in ITU-T Study Group 13, which leads the work of the International Telecommunication Union (ITU) on standards for next-generation networks (NGN) and future networks, and has been part of the team which formulated the following definition: “Internet of things (IoT): A global infrastructure for the information society, enabling advanced services by interconnecting (physical and virtual) things based on existing and evolving interoperable information and communication technologies.”

 

Fig: Factors driving the convergence and contributing to the integration and transformation of cloud, pipe, and device technologies.

 

IoT Strategic Research and Innovation Directions:

 

The development of enabling technologies such as nanoelectronics, communications, sensors, smart phones, embedded systems, cloud networking, network virtualization and software will be essential to provide things with the capability to be connected all the time, everywhere. This will also support important future IoT product innovations affecting many different industrial sectors. Some of these technologies, such as embedded or cyber-physical systems, form the edges of the “Internet of Things”, bridging the gap between cyber space and the physical world of real “things”, and are crucial in enabling the “Internet of Things” to deliver on its vision and become part of bigger systems in a world of “systems of systems”. An example of technology convergence is presented in the figure below.

 

                                                Fig: Technology convergence

 

The final report of the High-Level Expert Group on Key Enabling Technologies (KETs) identified the enabling technologies crucial to many of the existing and future value chains of the European economy:

• Nanotechnologies

• Micro and Nano electronics

• Photonics

• Biotechnology

• Advanced Materials

• Advanced Manufacturing Systems.

IoT creates intelligent applications that are based on the supporting KETs identified, as IoT applications address smart environments either at the physical or at the cyber-space level, and in real time. To this list of key enablers we can add the global deployment of IPv6 across the world, enabling global and ubiquitous addressing of any communicating smart thing.

From a technology perspective, the continuous increase in the integration density proposed by Moore’s Law was made possible by a dimensional scaling: in reducing the critical dimensions while keeping the electrical field constant, one obtained at the same time a higher speed and a reduced power consumption of a digital MOS circuit: these two parameters became driving forces of the microelectronics industry along with the integration density.

The International Technology Roadmap for Semiconductors emphasized in its early editions “miniaturization” and its associated benefits in terms of performance, the traditional parameters of Moore’s Law. This trend for increased performance will continue, while performance can always be traded against power depending on the individual application, sustained by the incorporation of new materials into devices and the application of new transistor concepts. This direction for further progress is labeled “More Moore”.

The second trend is characterized by functional diversification of semiconductor-based devices. These non-digital functionalities do contribute to the miniaturization of electronic systems, although they do not necessarily scale at the same rate as the one that describes the development of digital functionality. Consequently, in view of added functionality, this trend may be designated “More-than-Moore”.

Mobile data traffic is projected to double each year between now and 2015 and mobile operators will find it increasingly difficult to provide the bandwidth requested by customers. In many countries there is no additional spectrum that can be assigned and the spectral efficiency of mobile networks is reaching its physical limits. Proposed solutions are the seamless integration of existing Wi-Fi networks into the mobile ecosystem. This will have a direct impact on Internet of Things ecosystems.

The chips designed to accomplish this integration are known as “multicom” chips. Wi-Fi and baseband communications are expected to converge in three steps:

• 3G: the applications running on the mobile device decide which data are handled via the 3G network and which are routed over the Wi-Fi network.

• LTE Release 8: calls for seamless movement of all IP traffic between 3G and Wi-Fi connections.

• LTE Release 10: traffic is supposed to be routed simultaneously over 3G and Wi-Fi networks.

To allow for such seamless handovers between network types, the architecture of mobile devices is likely to change, and the baseband chip is expected to take control of the routing, so the connectivity components are connected to the baseband or integrated in a single silicon package. As a result of this architecture change, an increasing share of the integration work is likely to be done by baseband manufacturers (ultra-low-power solutions) rather than by handset producers.

The market for wireless communications is one of the fastest-growing segments in the integrated circuit industry. Breathtakingly fast innovation, rapid changes in communications standards, the entry of new players, and the evolution of new market sub segments will lead to disruptions across the industry. LTE and multicom solutions increase the pressure for industry consolidation, while the choice between the ARM and x86 architectures forces players to make big bets that may or may not pay off.

Integrated networking, information processing, sensing and actuation capabilities allow physical devices to operate in changing environments. Tightly coupled cyber and physical systems that exhibit a high level of integrated intelligence are referred to as cyber-physical systems. These systems are part of the enabling technologies for Internet of Things applications, where the computational and physical processes of such systems are tightly interconnected and coordinated to work together effectively, with or without humans in the loop. An example of enabling technologies for the Internet of Things is presented in the figure below. Robots, intelligent buildings, implantable medical devices, vehicles that drive themselves or planes that automatically fly in a controlled airspace are examples of cyber-physical systems that could be part of Internet of Things ecosystems.

 

 

Fig: Internet of Things – enabling technologies

 

The IoT Strategic Research and Innovation Agenda covers in a logical manner the vision, the technological trends, the applications, the technology enablers, the research agenda, timelines and priorities, and finally summarizes in two tables the future technological developments and research needs. Advances in embedded sensors, processing and wireless connectivity are bringing the power of the digital world to objects and places in the physical world. The IoT Strategic Research and Innovation Agenda is aligned with the findings of the 2011 Hype Cycle developed by Gartner [24], which includes the broad trend of the Internet of Things (called the “real-world Web” in earlier Gartner research).

The field of the Internet of Things is based on the paradigm of extending the IP protocol to all edges of the Internet, together with the fact that at the edge of the network many (very) small devices are still unable to support full IP protocol stacks. This means that solutions centered on such minimal Internet of Things devices are considered an additional Internet of Things paradigm, without IP at all access edges, due to their importance for the development of the field.
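One common way to bridge these two paradigms is a gateway that collects raw frames from constrained, non-IP edge devices and re-publishes them as messages an IP network can carry. The sketch below is purely illustrative; the class, device names and JSON message format are assumptions, not a standard:

```python
import json

class EdgeGateway:
    """Hypothetical gateway bridging non-IP edge devices into the
    IP-based IoT: it collects raw sensor values and re-publishes them
    as JSON payloads that ordinary IP infrastructure can transport."""

    def __init__(self, gateway_id):
        self.gateway_id = gateway_id

    def wrap(self, device_id, raw_value):
        # The gateway, not the constrained device, supplies the
        # network-level framing and addressing metadata.
        return json.dumps({
            "gateway": self.gateway_id,
            "device": device_id,
            "value": raw_value,
        })

gw = EdgeGateway("gw-01")
msg = gw.wrap("temp-sensor-7", 21.5)
```

The constrained sensor only needs to deliver a bare value over its own link-layer protocol; everything above that is added at the gateway.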

 

Applications and Scenarios of Relevance

The IERC vision is that “the major objectives for IoT are the creation of smart environments/spaces and self-aware things (for example: smart transport, products, cities, buildings, rural areas, energy, health, living, etc.) for climate, food, energy, mobility, digital society and health applications”, as shown in the figures below.

The outlook for the future is the emergence of a network of interconnected, uniquely identifiable objects and their virtual representations in an Internet-like structure that is positioned over a network of interconnected computers, allowing for the creation of a new platform for economic growth.

“Smart” is the new green, as defined by Frost & Sullivan, and green products and services will be replaced by smart products and services. Smart products have a real business case, can typically provide energy and efficiency savings of up to 30 per cent, and generally deliver a two- to three-year return on investment.

Fig:  Internet of Things — smart environments and smart spaces creation.

 

This trend will help the deployment of Internet of Things applications and the creation of smart environments and spaces.

At the city level, the integration of technology and quicker data analysis will lead to a more coordinated and effective civil response to security and safety (law enforcement and blue-light services) and a higher demand for outsourcing security capabilities.

At the building level, security technology will be integrated into systems and deliver a return on investment to the end-user by leveraging the technology in multiple applications (HR, time and attendance, customer behavior in retail applications, etc.).

There will be an increase in the development of “smart” vehicles with low (and possibly zero) emissions, which will also be connected to infrastructure. Additionally, auto manufacturers will make greater use of “smart” materials.

Intelligent packaging will be a “green” solution in its own right, reducing food waste. Intelligent materials will be used to create more comfortable clothing fabrics. Phase-change materials will help regulate temperatures in buildings, reducing energy demand for heating and cooling. Increasing investment in research and development, alliances with scientific bodies and value creation with IP and product lines will lead to the replacement of synthetic additives by natural ingredients and the formulation of fortified and enriched foods in convenient and tasty formats. Local sourcing of ingredients will become more common as the importance of what consumers eat increases. Revealing the carbon footprints of foods will be a focus in the future.

 

Fig. Internet of Things in the context of smart environments and applications

 

The key focus will be to make the city smarter by optimizing resources, feeding its inhabitants through urban farming, reducing traffic congestion, providing more services to allow for faster travel between home and various destinations, and increasing accessibility to essential services. It will become essential to implement intelligent security systems at key junctions in the city, and various types of sensors will have to be used to make this a reality. Sensors are moving from “smart” to “intelligent”. Biometrics is expected to be integrated with CCTV at highly sensitive locations around the city, and national identification cards will become an essential tool for the identification of individuals. In addition, smart cities in 2020 will require real-time auto-identification security systems.

 

Fig.  Smart world illustration.

 

A range of smart products and concepts will significantly impact the power sector. For instance, sensors in the home will control lights, turning them off periodically when there is no movement in the room. Home Area Networks will enable utilities or individuals to control when appliances are used, giving the consumer a greater ability to determine when they want to use electricity, and at what price. This is expected to equalize the need for peak power and spread the load more evenly over time. The reduction in the need for peaking power plant capacity will help delay investment for utilities. Pattern-recognizing smart meters will both help to store electricity and anticipate usual consumption patterns within the home. All appliances will be used as electricity storage facilities, as well as users of it. Storm-water management and smart grid water will see growth.

Waste-water treatment plants will evolve into bio-refineries. New, innovative waste-water treatment processes will enable water recovery to help close the growing gap between water supply and demand. Self-sensing controls and devices will mark new innovations in the building technologies space. Customers will demand more automated, self-controlled solutions with built-in fault detection and diagnostic capabilities.

Development of smart implantable chips that can monitor and report individual health status periodically will see rapid growth. Smart pumps and smart appliances/devices are expected to be significant contributors towards efficiency improvement. Process equipment with built-in “smartness” to self-assess and generate reports on its performance, enabling efficient asset management, will be adopted.

In the future, batteries will recharge from radio signals and cell phones will recharge from Wi-Fi. Smaller cells (micro, pico, femto) will result in more cell sites placed closer together, but they will be greener, provide power/cost savings and, at the same time, higher throughput. Connected homes that enable consumers to manage their energy, media, security and appliances will be part of the IoT applications of the future.
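The Home Area Network idea of running appliances when electricity is cheapest can be sketched as a simple scheduler over a day-ahead price forecast. The function name and the price figures below are illustrative assumptions, not any utility's API:

```python
def schedule_appliance(hourly_prices, hours_needed):
    """Return the (sorted) indices of the cheapest hours in which to
    run a deferrable appliance such as a dishwasher or EV charger."""
    ranked = sorted(range(len(hourly_prices)), key=lambda h: hourly_prices[h])
    return sorted(ranked[:hours_needed])

# Assumed day-ahead price forecast in EUR/kWh for six hours.
prices = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25]
run_hours = schedule_appliance(prices, hours_needed=2)
# The two cheapest slots are hours 3 and 4.
```

A real Home Area Network controller would combine such a schedule with user constraints (e.g. "finished by 7 a.m.") and live price updates, but the core decision is exactly this ranking of hours by price.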

Test and measurement equipment is expected to become smarter in the future in response to the demand for modular instruments with lower power consumption. Furthermore, electronics manufacturing factories will become more sustainable: they will use renewable energy and sell unused energy back to the grid, improve water conservation with rain harvesting, and implement other smart building technologies, thus making their sites “intelligent manufacturing facilities”.

General Electric Co. considers that this is taking place through the convergence of the global industrial system with the power of advanced computing, analytics, low-cost sensing and new levels of connectivity permitted by the Internet. The deeper meshing of the digital world with the world of machines holds the potential to bring about profound transformation to global industry, and in turn to many aspects of daily life [15]. The Industrial Internet starts with embedding sensors and other advanced instrumentation in an array of machines, from the simple to the highly complex, as seen in the figure below. This allows the collection and analysis of an enormous amount of data, which can be used to improve machine performance, and inevitably the efficiency of the systems and networks that link them. Even the data itself can become “intelligent”, instantly knowing which users it needs to reach.

 

Fig. Industrial internet applications

 

The new concept of the Internet of Energy requires web-based architectures to readily guarantee information delivery on demand and to change the traditional power system into a networked Smart Grid that is largely automated, applying greater intelligence to operate, enforce policies, monitor and self-heal when necessary. This requires the integration and interfacing of the power grid with the network of data represented by the Internet, embracing energy generation, transmission, delivery, substations, distribution control, metering and billing, diagnostics, and information systems so that they work seamlessly and consistently.

This concept would enable the ability to produce, store and efficiently use energy, while balancing supply and demand by using a cognitive Internet of Energy that harmonizes the energy grid by processing the data, information and knowledge via the Internet. In fact, as seen in the figure below, the Internet of Energy will leverage the information highway provided by the Internet to link computers, devices and services with the distributed smart energy grid, which is the freight highway for renewable energy resources, allowing stakeholders to invest in green technologies and sell excess energy back to the utility. Internet of Energy applications are connected through the Future Internet and the Internet of Things, enabling seamless and secure interactions and cooperation of intelligent embedded systems over heterogeneous communication infrastructures.

 

 

 

Fig. Internet of Things embedded in internet of energy applications

 

It is expected that this “development of smart entities will encourage development of the novel technologies needed to address the emerging challenges of public health, aging population, environmental protection and climate change, conservation of energy and scarce materials, enhancements to safety and security and the continuation and growth of economic prosperity.” IoT applications are further linked with Green ICT, as the IoT will drive energy-efficient applications such as the smart grid, connected electric cars and energy-efficient buildings, eventually helping to build green intelligent cities.

 

IoT Applications

 

Smart Cities

By 2020 we will see the development of megacity corridors and networked, integrated and branded cities. With more than 60 percent of the world population expected to live in urban areas by 2025, urbanization as a trend will have diverging impacts and influences on future personal lives and mobility. Rapid expansion of city borders, driven by population increase and infrastructure development, will force city borders to expand outward and engulf the surrounding daughter cities to form megacities, each with a population of more than 10 million. By 2023, there will be 30 megacities globally, with 55 percent in the developing economies of India, China, Russia and Latin America.

This will lead to the evolution of smart cities with eight smart features: Smart Economy, Smart Buildings, Smart Mobility, Smart Energy, Smart Information Communication and Technology, Smart Planning, Smart Citizens and Smart Governance. There will be about 40 smart cities globally by 2025.

The role of city governments will be crucial for IoT deployment: the running of day-to-day city operations and the creation of city development strategies will drive the use of the IoT. Cities and their services therefore represent an almost ideal platform for IoT research, taking city requirements into account and transferring them to solutions enabled by IoT technology.

In Europe, the largest smart city initiative completely focused on IoT is being undertaken by the FP7 SmartSantander project. This project aims at deploying an IoT infrastructure comprising thousands of IoT devices spread across several cities (Santander, Guildford, Luebeck and Belgrade). This will enable the simultaneous development and evaluation of services and the execution of various research experiments, thus facilitating the creation of a smart city environment.

There are numerous important research challenges for smart city IoT applications:

• Overcoming the traditional silo-based organization of cities, with each utility responsible for its own closed world. Although not technological, this is one of the main barriers

• Creating algorithms and schemes to describe the information created by sensors in different applications, enabling a useful exchange of information between different city services

• Mechanisms for cost-efficient deployment and, even more importantly, maintenance of such installations, including energy scavenging

• Ensuring reliable readings from a plethora of sensors and efficient calibration of a large number of sensors deployed everywhere from lamp-posts to waste bins

• Low energy protocols and algorithms

• Algorithms for the analysis and processing of data acquired in the city, and making “sense” of it

• IoT large scale deployment and integration
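The reliable-readings challenge listed above can be illustrated with a minimal cross-check of redundant sensors: readings far from the group median are treated as faulty and discarded. The function, values and tolerance below are assumptions for the sake of the sketch:

```python
import statistics

def filter_outliers(readings, max_deviation):
    """Keep only readings within max_deviation of the group median;
    a crude but common first line of defence against faulty sensors."""
    median = statistics.median(readings)
    return [r for r in readings if abs(r - median) <= max_deviation]

# Five lamp-post temperature sensors covering the same street;
# one of them has drifted badly (illustrative numbers).
raw = [21.1, 20.9, 21.3, 35.0, 21.0]
clean = filter_outliers(raw, max_deviation=2.0)
```

In a city-scale deployment such cross-validation would feed back into a calibration service, but the core idea, comparing each sensor against its neighbours, is the same.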

 

Smart Energy and the Smart Grid:-

 

There is increasing public awareness about the changing paradigm of our policy on energy supply, consumption and infrastructure. For several reasons, our future energy supply should no longer be based on fossil resources, and neither is nuclear energy a future-proof option. In consequence, future energy supply needs to be based largely on various renewable resources, and increasing focus must be directed to our energy consumption behavior. Because of its volatile nature, such supply demands an intelligent and flexible electrical grid which is able to react to power fluctuations by controlling electrical energy sources (generation, storage) and sinks (load, storage) and by suitable reconfiguration. Such functions will be based on networked intelligent devices (appliances, micro-generation equipment, infrastructure, consumer products) and grid infrastructure elements, largely based on IoT concepts. Although this ideally requires insight into the instantaneous energy consumption of individual loads (e.g. devices, appliances or industrial equipment), information about energy usage on a per-customer level is a suitable first approach.

Future energy grids are characterized by a high number of distributed small and medium-sized energy sources and power plants, which may be combined virtually, ad hoc, into virtual power plants; moreover, in the case of energy outages or disasters, certain areas may be isolated from the grid and supplied from within by internal energy sources such as photovoltaics on the roofs, block heat and power plants, or the energy storage of a residential area (“islanding”). A grand challenge for enabling technologies such as cyber-physical systems is the design and deployment of an energy system infrastructure that is able to provide blackout-free electricity generation and distribution, is flexible enough to allow heterogeneous energy supply to or withdrawal from the grid, and is impervious to accidental or intentional manipulation. The integration of cyber-physical systems engineering and technology into the existing electric grid and other utility systems is a challenge. The increased system complexity poses technical challenges that must be considered as the system is operated in ways that were not intended when the infrastructure was originally built. As technologies and systems are incorporated, security remains a paramount concern: system vulnerability must be lowered and stakeholder data protected. These challenges will also need to be addressed by the IoT applications that integrate heterogeneous cyber-physical systems.

 

 

The developing Smart Grid, represented in the figure above, is expected to implement a new concept of transmission network which is able to efficiently route the energy produced by both concentrated and distributed plants to the final user, with high security and quality-of-supply standards. The Smart Grid is therefore expected to be the implementation of a kind of “Internet” in which the energy packet is managed similarly to the data packet, across routers and gateways which can autonomously decide the best pathway for the packet to reach its destination with the best integrity levels. In this respect, the “Internet of Energy” concept is defined as a network infrastructure based on standard and interoperable communication transceivers, gateways and protocols that will allow a real-time balance between local and global generation and storage capability and the energy demand. This will also allow a high level of consumer awareness and involvement.

 

The Internet of Energy (IoE) provides an innovative concept for power distribution, energy storage, grid monitoring and communication, as presented in the figure below. It will allow units of energy to be transferred when and where they are needed. Power consumption monitoring will be performed at all levels, from local individual devices up to the national and international level.

 

Fig. Internet of energy: Residential building ecosystem

 

Saving energy based on improved user awareness of momentary energy consumption is another pillar of future energy management concepts. Smart meters can give information about instantaneous energy consumption to the user, allowing for the identification and elimination of energy-wasting devices and providing hints for optimizing individual energy consumption. In a smart grid scenario, energy consumption will be steered by a volatile energy price, which in turn is based on the momentary demand (acquired by smart meters) and the available amount of energy and renewable energy production. In a virtual energy marketplace, software agents may negotiate energy prices and place energy orders with energy companies. It is already recognized that these decisions need to consider environmental information such as weather forecasts and local and seasonal conditions, and that this information must be available at a much finer time scale and spatial resolution.
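A software agent in such a virtual energy marketplace can be reduced, for illustration, to a simple limit order: buy the household's remaining demand as soon as the spot price drops below the consumer's limit. The class, prices and quantities below are assumptions, not a real marketplace API:

```python
class EnergyAgent:
    """Hypothetical marketplace agent: watches the volatile spot
    price and places one order when it falls to an acceptable level."""

    def __init__(self, limit_price, demand_kwh):
        self.limit_price = limit_price
        self.demand_kwh = demand_kwh
        self.orders = []

    def on_price_tick(self, price):
        # Negotiation is reduced to a limit order for this sketch:
        # buy the whole remaining demand at the first acceptable price.
        if price <= self.limit_price and self.demand_kwh > 0:
            self.orders.append((price, self.demand_kwh))
            self.demand_kwh = 0

# Assumed price ticks in EUR/kWh over the course of an afternoon.
agent = EnergyAgent(limit_price=0.15, demand_kwh=3.0)
for p in [0.22, 0.18, 0.14, 0.12]:
    agent.on_price_tick(p)
```

A production agent would also weigh weather forecasts and consumption schedules, as the text notes, but this shows the basic price-driven ordering loop.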

In the long run, electromobility will become another important element of smart power grids. An example of an electric mobility ecosystem is presented in the figure below. Electric vehicles (EVs) may act as a power load as well as movable energy storage, linked as IoT elements to the energy information grid (smart grid). IoT-enabled smart grid control may need to consider energy demand and offerings in residential areas and along major roads based on traffic forecasts. EVs will be able to act as a sink or source of energy based on their charge status, usage schedule and the energy price, which in turn may depend on the abundance of (renewable) energy in the grid. This is the touch point from where the following telematics IoT scenarios will merge with smart grid IoT.
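The sink-or-source decision described above can be sketched as a small rule over the vehicle's state of charge and the current grid price. All thresholds below are illustrative assumptions:

```python
def ev_action(soc, price, cheap=0.10, expensive=0.30,
              min_soc=0.30, max_soc=0.90):
    """Decide whether an EV should charge, discharge (vehicle-to-grid)
    or idle, from its state of charge (0..1) and the grid price."""
    if price <= cheap and soc < max_soc:
        return "charge"      # abundant (renewable) energy: act as a sink
    if price >= expensive and soc > min_soc:
        return "discharge"   # scarcity: act as a source for the grid
    return "idle"            # keep charge for the usage schedule
```

A real controller would also respect the driver's usage schedule (e.g. never discharge below the charge needed for the morning commute), which here is approximated by the `min_soc` floor.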

This scenario is based on the existence of an IoT network of a vast multitude of intelligent sensors and actuators which are able to communicate safely and reliably. Latencies are critical when talking about electrical control loops, and even though it is not the most critical feature here, low energy dissipation should be mandatory. In order to facilitate interaction between different vendors’ products, the technology should be based on a standardized communication protocol stack. When dealing with a critical part of the public infrastructure, data security is of the highest importance, and in order to satisfy the extremely high requirements on the reliability of energy grids, the components as well as their interactions must feature the highest reliability performance.

 

 

                                                Fig. Electric mobility system

 

New organizational and learning strategies for sensor networks will be needed in order to cope with the shortcomings of classical hierarchical control concepts. The intelligence of smart systems does not necessarily need to be built into the devices at the systems’ edges. Depending on connectivity, cloud-based IoT concepts might be advantageous when considering energy dissipation and hardware effort.

Sophisticated and flexible data filtering, data mining and processing procedures and systems will become necessary in order to handle the high amount of raw data provided by billions of data sources. System and data models need to support the design of flexible systems which guarantee a reliable and secure real-time operation.

Some research challenges:

• Absolutely safe and secure communication with elements at the network edge

• Addressing scalability and standards interoperability

• Energy saving robust and reliable smart sensors/actuators

• Technologies for data anonymity addressing privacy concerns

• Dealing with critical latencies, e.g. in control loops

• System partitioning (local/cloud based intelligence)

• Mass data processing, filtering and mining; avoid flooding of communication network

• Real-time Models and design methods describing reliable interworking of heterogeneous systems (e.g. technical/economical/social/environmental systems). Identifying and monitoring critical system elements. Detecting critical overall system states in due time

• System concepts which support self-healing and containment of damage; strategies for failure contingency management

• Scalability of security functions

• Power grids have to be able to react correctly and quickly to fluctuations in the supply of electricity from renewable energy sources such as wind and solar facilities.

 

Smart Transportation and Mobility

 

The connection of vehicles to the Internet gives rise to a wealth of new possibilities and applications which bring new functionality to individuals and/or make transport easier and safer. In this context, the concept of the Internet of Vehicles (IoV), connected with the concept of the Internet of Energy (IoE), represents a future trend for smart transportation and mobility applications.

At the same time, creating new mobile ecosystems based on trust, security and convenience for mobile/contactless services and transportation applications will ensure security, mobility and convenience in consumer-centric transactions and services.

Representing human behavior in the design, development and operation of cyber-physical systems in autonomous vehicles is a challenge. Incorporating human-in-the-loop considerations is critical to safety, dependability and predictability. There is currently limited understanding of how driver behavior will be affected by adaptive traffic control cyber-physical systems. In addition, it is difficult to account for the stochastic effects of the human driver in a mixed traffic environment (i.e., human and autonomous vehicle drivers) such as that found in traffic control cyber-physical systems. Increasing integration calls for security measures that are not physical but logical, while still ensuring there will be no security compromise. As cyber-physical systems become more complex and the interactions between components increase, safety and security will continue to be of paramount importance [27]. All these elements are of paramount importance for the IoT ecosystems developed on the basis of these enabling technologies. An example of a standalone energy ecosystem is presented in the figure below.

 

                                      Fig.  Standalone energy ecosystem.

 

When talking about IoT in the context of automotive and telematics, we may refer to the following application scenarios:

• Standards must be defined regarding the charging voltage of the power electronics, and a decision needs to be made as to whether the recharging processes should be controlled by a system within the vehicle or one installed at the charging station.

• Components for bidirectional operations and flexible billing for electricity need to be developed if electric vehicles are to be used as electricity storage media.

 

Smart Home, Smart Buildings and Infrastructure:

The rise of Wi-Fi’s role in home automation has primarily come about due to the networked nature of deployed electronics, where electronic devices (TVs and AV receivers, mobile devices, etc.) have started becoming part of the home IP network, and due to the increasing rate of adoption of mobile computing devices (smartphones, tablets, etc.), as shown in the figure below. The networking aspects are bringing online streaming services and network playback, while becoming a means to control device functionality over the network. At the same time, mobile devices ensure that consumers have access to a portable ‘controller’ for the electronics connected to the network. Both types of devices can be used as gateways for IoT applications. In this context, many companies are considering building platforms that integrate building automation with entertainment, healthcare monitoring, energy monitoring and wireless sensor monitoring in home and building environments.

 

 

                                                            Fig. Smart home platform

IoT applications using sensors to collect information about operating conditions, combined with cloud-hosted analytics software that analyzes disparate data points, will help facility managers become far more proactive about managing buildings at peak efficiency. Issues of building ownership (i.e., building owner, manager, or occupants) challenge integration with questions such as who pays the initial system cost and who collects the benefits over time. A lack of collaboration between the subsectors of the building industry slows new technology adoption and can prevent new buildings from achieving energy, economic and environmental performance targets.
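The proactive, analytics-driven facility management described above often amounts to drift detection: flagging equipment whose energy use creeps above its historical baseline before it fails outright. The function, figures and tolerance below are illustrative assumptions:

```python
def detect_drift(history, latest, tolerance=0.20):
    """Return True if the latest reading exceeds the historical mean
    by more than the given fractional tolerance (default 20%)."""
    baseline = sum(history) / len(history)
    return latest > baseline * (1 + tolerance)

# Assumed daily energy use of one air-handling unit, in kWh/day.
ahu_history = [12.0, 11.5, 12.3, 11.8]
needs_attention = detect_drift(ahu_history, latest=15.1)
```

Cloud-hosted analytics would run such checks across thousands of data points and correlate them (weather, occupancy, schedules), but a rolling-baseline comparison of this kind is the basic building block.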

In the context of the future Internet of Things, intelligent building management systems can be considered part of a much larger information system used by facilities managers to manage energy use and energy procurement and to maintain building systems. It is based on the infrastructure of existing intranets and the Internet, and therefore utilizes the same standards as other IT devices. Within this context, reductions in the cost and improvements in the reliability of WSNs are transforming building automation by making the maintenance of energy-efficient, healthy and productive workspaces in buildings increasingly cost-effective.

 

Smart Factory and Smart Manufacturing:-

The role of the Internet of Things is becoming more prominent in enabling access to devices and machines which, in manufacturing systems, were hidden in well-designed silos. This evolution will allow IT to penetrate further into digitized manufacturing systems. The IoT will connect the factory to a whole new range of applications which run around the production, ranging from connecting the factory to the smart grid, to sharing the production facility as a service, to allowing more agility and flexibility within the production systems themselves. In this sense, the production system could be considered one of many Internets of Things, where a new ecosystem for smarter and more efficient production could be defined.

The first evolutionary step towards a shared smart factory could be demonstrated by enabling today’s external stakeholders to interact with an IoT-enabled manufacturing system. These stakeholders could include the suppliers of production tools (e.g. machines, robots), the providers of production logistics (e.g. material flow, supply chain management), and maintenance and re-tooling actors. An IoT-based architecture that challenges the hierarchical and closed factory automation pyramid, by allowing the abovementioned stakeholders to run their services in a multi-tier, flat production system, has been proposed. This means that the services and applications of tomorrow do not need to be defined in an intertwined manner, strictly linked to the physical system, but can instead run as services in a shared physical world.

The room for innovation in the application space could increase to the same degree as it has for embedded applications, or “apps”, which have exploded since the arrival of smartphones (i.e. the provision of a clear and well-standardized interface to the embedded hardware of a mobile phone, accessible to all types of apps). The key enabler of this ICT-driven smart and agile manufacturing lies in the way we manage and access the physical world: the sensors, the actuators and the production units should be accessed and managed through the same, or at least similar, standard IoT interfaces and technologies. These devices then provide their services in a well-structured manner, and can be managed and orchestrated for a multitude of applications running in parallel.
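The idea of devices providing their services through one uniform interface, discoverable and orchestrable by many applications at once, can be sketched with a minimal service registry. The classes, device names and capability strings below are illustrative assumptions, not a factory standard:

```python
class DeviceService:
    """A factory device exposed through a uniform service interface."""

    def __init__(self, name, capability):
        self.name = name
        self.capability = capability  # e.g. "weld", "pick", "inspect"

    def invoke(self, payload):
        # A real device would act on the shop floor; this sketch echoes
        # the call so orchestration can be demonstrated.
        return f"{self.name}:{self.capability}:{payload}"

class Registry:
    """Lets any application discover devices by capability, instead of
    hard-wiring services to specific machines."""

    def __init__(self):
        self.services = []

    def register(self, svc):
        self.services.append(svc)

    def find(self, capability):
        return [s for s in self.services if s.capability == capability]

reg = Registry()
reg.register(DeviceService("robot-1", "weld"))
reg.register(DeviceService("camera-2", "inspect"))
welders = reg.find("weld")
```

Because applications ask the registry for a capability rather than a specific machine, external stakeholders (tool suppliers, logistics, maintenance) can all run their services against the same flat interface.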

The convergence of microelectronics and micromechanical parts within a sensing device, the ubiquity of communications, the rise of micro-robotics, the customization made possible by software will significantly change the world of manufacturing. In addition, broader pervasiveness of telecommunications in many environments is one of the reasons why these environments take the shape of ecosystems.

Some of the main challenges associated with the implementation of cyber physical systems include affordability, network integration, and the interoperability of engineering systems.

Most companies have a difficult time justifying risky, expensive and uncertain investments in smart manufacturing across the company and factory level. Changes to the structure, organization and culture of manufacturing occur slowly, which hinders technology integration. Pre-digital-age control systems are infrequently replaced because they are still serviceable, and retrofitting these existing plants with cyber-physical systems is difficult and expensive. The lack of a standard industry approach to production management results in customized software or the use of a manual approach. There is also a need for a unifying theory of non-homogeneous control and communication systems.

Smart Health:-

The market for health monitoring devices is currently characterized by application-specific solutions that are mutually non-interoperable and built on diverse architectures. While individual products are designed to cost targets, the long-term goal of achieving lower technology costs across current and future sectors will inevitably be very challenging unless a more coherent approach is used. An example of a smart health platform is given in the figure below.

 

                        Fig. Example of smart health platform

 

The links between the many applications in health monitoring are:

• Applications require the gathering of data from sensors

• Applications must support user interfaces and displays

• Applications require network connectivity for access to infrastructural services

• Applications have in-use requirements such as low power, robustness, durability, accuracy and reliability.

IoT applications are pushing the development of platforms for implementing ambient assisted living (AAL) systems that will offer services in the areas of assistance to carry out daily activities, health and activity monitoring, enhancing safety and security, getting access to medical and emergency systems, and facilitating rapid health support. The main objective is to enhance life quality for people who need permanent support or monitoring, to decrease barriers for monitoring important health parameters, to avoid unnecessary healthcare costs and efforts, and to provide the right medical support at the right time.
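The health and activity monitoring at the heart of such AAL platforms boils down to comparing periodic readings against per-patient limits and raising alerts. The parameter names and limit values below are illustrative assumptions, not clinical guidance:

```python
# Assumed per-patient acceptable ranges (low, high) for two vital signs.
LIMITS = {"heart_rate": (50, 110), "spo2": (92, 100)}

def check_vitals(readings):
    """Return the list of parameters whose latest reading falls
    outside the configured range; an empty list means all is well."""
    alerts = []
    for name, value in readings.items():
        low, high = LIMITS[name]
        if not (low <= value <= high):
            alerts.append(name)
    return alerts

# One periodic reading from a wearable: elevated heart rate, normal SpO2.
alerts = check_vitals({"heart_rate": 124, "spo2": 97})
```

A deployed system would add trend analysis, sensor-fusion plausibility checks and escalation paths (carer, GP, emergency services), but this threshold check is the first stage of the monitoring pipeline.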

Challenges exist in the overall cyber-physical infrastructure (e.g., hardware, connectivity, software development and communications), in specialized processes at the intersection of control and sensing, in sensor fusion and decision making, in security, and in the compositionality of cyber-physical systems. Proprietary medical devices were in general not designed for interoperation with other medical devices or computational systems, necessitating advancements in networking and distributed communication within cyber-physical architectures. Interoperability and closed-loop systems appear to be the key to success. System security will be critical as individual patient data is communicated over cyber-physical networks. In addition, validating data acquired from patients using new cyber-physical technologies against existing gold-standard data acquisition methods will be a challenge. Cyber-physical technologies will also need to be designed to operate with minimal patient training or cooperation.

New and innovative technologies are needed to cope with the trends towards wired and wireless high-speed interfaces, miniaturization, and modular design approaches for products that integrate multiple technologies. The communication technologies address different levels and layers in smart health platforms, as shown in the figure below.

 

 

 

Fig. Communication layers in smart health platforms.

 

Internet of Things applications have future market potential for electronic health services and the connected telecommunications industry. In this context, telecommunications can foster the evolution of ecosystems in different application areas. Medical expenditures are in the range of 10% of the European gross domestic product. The telemedicine market segment, one of the lead markets of the future, is expected to grow at rates of more than 19%. The convergence of bio-parameter sensing, communication technologies and engineering is turning health care into a new type of information industry. In this context, the progress beyond the state of the art for IoT applications in healthcare is envisaged as follows:

• Standardization of interface from sensors and MEMS for an open platform to create a broad and open market for bio-chemical innovators.

• Providing a high degree of automation in the capture and processing of information.

• Real-time data over networks (streaming and regular single measurements) to be available to clinicians anywhere on the web with appropriate software and privileges; data travelling over trusted web.

• Reuse of components across a smooth progression from low-cost “home health” devices to higher-cost “professional” devices.

• Data needs to be interchangeable between all authorized devices in use within the clinical care pathway, from home, ambulance, clinic, GP, hospital, without manual transfer of data.

 

Food and Water Tracking and Security

 

Food and fresh water are the most important natural resources in the world. Organic food produced without addition of certain chemical substances and according to strict rules, or food produced in certain geographical areas will be particularly valued. Similarly, fresh water from mountain springs is already highly valued. In the future it will be very important to bottle and distribute water adequately. This will inevitably lead to attempts to forge the origin or the production process. Using IoT in such scenarios to secure tracking of food or water from the production place to the consumer is one of the important topics. This has already been introduced to some extent in regard to beef meat. After the “mad cow disease” outbreak in the late 20th century, some beef manufacturers together with large supermarket chains in Ireland are offering “from pasture to plate” traceability of each package of beef meat in an attempt to assure consumers that the meat is safe for consumption. However, this is limited to certain types of food and enables tracing back to the origin of the food only, without information on the production process.

IoT applications need to have a development framework that will assure the following:

• The things connected to the Internet need to provide value. The things that are part of the IoT need to provide a valuable service at a price point that enables adoption, or they need to be part of a larger system that does.

• Use of a rich ecosystem for development. The IoT comprises things, sensors, communication systems, servers, storage, analytics, and end-user services. Developers, network operators, hardware manufacturers, and software providers need to come together to make it work. Partnerships among the stakeholders will make functionality easily available to customers.

• Systems need to provide APIs that let users take advantage of systems suited to their needs on devices of their choice. APIs also allow developers to innovate and create something interesting using the system’s data and services, ultimately driving the system’s use and adoption.

• Developers need to be attracted, since implementation will be done on a development platform. Developers using different tools to build solutions that work across device platforms will play a key role in future IoT deployment.

• Security needs to be built in. Connecting things previously cut off from the digital world will expose them to new attacks and challenges.
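To make the API point above concrete, here is a minimal sketch (the class, method names and payloads are hypothetical, not a real product API) of a device-side layer that lets users and developers consume readings as JSON instead of touching hardware directly:

```python
import json

class DeviceAPI:
    """Hypothetical sketch of a device-facing API layer: callers receive
    JSON documents, never raw hardware access."""
    def __init__(self):
        self._readings = {}

    def publish(self, sensor_id, value):
        """Called by the device firmware when a new reading is available."""
        self._readings[sensor_id] = value

    def get(self, sensor_id):
        """Called by applications; a real system would serve this over
        HTTP/CoAP with authentication and rate limiting."""
        if sensor_id not in self._readings:
            return json.dumps({"error": "unknown sensor"})
        return json.dumps({"sensor": sensor_id, "value": self._readings[sensor_id]})

api = DeviceAPI()
api.publish("temp-kitchen", 21.5)
print(api.get("temp-kitchen"))
```

Exposing such an interface is what allows third-party developers to build innovative services on the system's data, driving the adoption the text describes.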

The research challenges are:

• Design of secure, tamper-proof and cost-efficient mechanisms for tracking food and water from production to consumers, enabling immediate notification of actors in case of harmful food and communication of trusted information.

• Secure way of monitoring production processes, providing sufficient information and confidence to consumers. At the same time details of the production processes which might be considered as intellectual property, should not be revealed.

• Ensure trust and secure exchange of data among applications and infrastructures (farm, packing industry, retailers) to prevent the introduction of false or misleading data, which can affect the health of the citizens or create economic damage to the stakeholders.

 

Social Networks and IoT:-

 

From a user perspective, abstract connectedness and real-world interdependencies are not easily captured mentally. What users however easily relate to is the social connectedness of family and friends. The user engagement in IoT awareness could build on the Social Network paradigm, where the users interact with the real world entities of interest via the social network paradigm. This combination leads to interesting and popular applications, which will become more sophisticated and innovative.

Future research directions in IoT applications should consider the social dimension, based on integration with social networks, which can be seen as another bundle of information streams. Note also that social networks are characterized by the massive participation of human users. Hence, the wave of social IoT applications is likely to be built on successful paradigms of participatory sensing applications, which will be extended on the basis of an increasing number of autonomous, interacting Internet-connected devices. The use of the social network metaphor for the interactions between Internet-connected objects has recently been proposed, and it could enable novel forms of M2M interactions and related applications.

                        

Related Future Technologies:-

Cloud Computing: Cloud computing has been established as one of the major building blocks of the Future Internet. New technology enablers have progressively fostered virtualization at different levels and have enabled the various paradigms known as “Applications as a Service”, “Platforms as a Service” and “Infrastructure and Networks as a Service”. Such trends have greatly helped to reduce the cost of ownership and management of the associated virtualized resources, lowering the market-entry threshold for new players and enabling the provisioning of new services. With the virtualization of objects being the next natural step in this trend, the convergence of cloud computing and the Internet of Things will enable unprecedented opportunities in the IoT services arena. As part of this convergence, IoT applications (such as sensor-based services) will be delivered on demand through a cloud environment. This extends beyond the need to virtualize sensor data stores in a scalable fashion; it calls for the virtualization of Internet-connected objects and their orchestration into on-demand services (such as Sensing-as-a-Service). Moreover, generalizing the serving scope of an Internet-connected object beyond the “sensing service”, it is not hard to imagine virtual objects that will be integrated into the fabric of future IoT services and shared and reused in different contexts, projecting an “Object as a Service” paradigm aimed (as in other virtualized resource domains) at minimizing the costs of ownership and maintenance of objects and fostering the creation of innovative IoT services.

Relevant topics for the research agenda will therefore include:

• The description of requests for services to a cloud/IoT infrastructure,

• The virtualization of objects,

• Tools and techniques for optimization of cloud infrastructures subject to utility and SLA criteria,

• The investigation of:

◦ utility metrics, and

◦ (reinforcement) learning techniques that could be used for gauging on-demand IoT services in a cloud environment,

• Techniques for real-time interaction of Internet-connected objects within a cloud environment through the implementation of lightweight interactions and the adaptation of real-time operating systems.

• Access control models to ensure the proper access to the data stored in the cloud.
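A minimal sketch of the Sensing-as-a-Service / “Object as a Service” idea discussed above (all class and method names are hypothetical): a broker holds virtual objects that wrap physical sensors, and clients request a capability rather than a specific device:

```python
class VirtualSensor:
    """Hypothetical virtual object wrapping a physical sensor behind a
    uniform read() interface."""
    def __init__(self, sensor_id, read_fn):
        self.sensor_id = sensor_id
        self._read = read_fn

    def read(self):
        return {"sensor": self.sensor_id, "value": self._read()}

class SensingService:
    """Hypothetical on-demand broker: clients ask for a capability,
    not a device, so objects can be shared and reused across contexts."""
    def __init__(self):
        self._catalog = {}

    def register(self, capability, sensor):
        self._catalog.setdefault(capability, []).append(sensor)

    def request(self, capability):
        # A real broker would also evaluate SLA criteria, utility metrics
        # and access control, as the research topics above note.
        sensors = self._catalog.get(capability, [])
        return sensors[0] if sensors else None

svc = SensingService()
svc.register("temperature", VirtualSensor("t-042", lambda: 21.5))
vs = svc.request("temperature")
print(vs.read())
```

The indirection through the broker is what lets the same physical object serve many applications, the cost-sharing benefit the paradigm aims at.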

 

IoT and Semantic Technologies

The 2010 SRA has identified the importance of semantic technologies towards discovering devices, as well as towards achieving semantic interoperability. During the past years, semantic web technologies have also proven their ability to link related data (web-of-data concept), while relevant tools and techniques have just emerged. Future research on IoT is likely to embrace the concept of Linked Open Data. This could build on the earlier integration of ontologies (e.g., sensor ontologies) into IoT infrastructures and applications.

Semantic technologies will also have a key role in enabling sharing and re-use of virtual objects as a service through the cloud, as illustrated in the previous paragraph. The semantic enrichment of virtual object descriptions will realize for IoT what semantic annotation of web pages has enabled in the Semantic Web. Associated semantic-based reasoning will assist IoT users to more independently find the relevant proven virtual objects to improve the performance or the effectiveness of the IoT applications they intend to use.

 

Autonomy

Spectacular advances in technology have introduced increasingly complex and large-scale computer and communication systems. Autonomic computing, inspired by biological systems, has been proposed as a grand challenge that will allow systems to self-manage this complexity, using high-level objectives and policies defined by humans. The objective is to provide the system with self-x properties, where x can be adaptation, organization, optimization, configuration, protection, healing, discovery, description, etc.

The Internet of Things will exponentially increase the scale and the complexity of existing computing and communication systems. Autonomy is thus an imperative property for IoT systems to have. However, there is still a lack of research on how to adapt and tailor existing research on autonomic computing to the specific characteristics of IoT, such as high dynamicity and distribution, real-time nature, resource constraints, and lossy environments.

Properties of Autonomic IoT Systems

The following properties are particularly important for IoT systems and need further research:

Self-adaptation

In the very dynamic context of the IoT, from the physical to the application layer, self-adaptation is an essential property that allows communicating nodes, as well as the services using them, to react in a timely manner to the continuously changing context, in accordance with, for instance, business policies or performance objectives defined by humans. IoT systems should be able to reason autonomously and make self-adapting decisions. Cognitive radios at the physical and link layers, self-organizing network protocols, and automatic service discovery and (re-)binding at the application layer are important enablers for the self-adapting IoT.

 

Self-organization

In IoT systems, and especially in WS&ANs, it is very common for nodes to join and leave the network spontaneously. The network should therefore be able to re-organize itself in response to this evolving topology. Self-organizing, energy-efficient routing protocols are of considerable importance in IoT applications in order to provide seamless data exchange across highly heterogeneous networks. Due to the large number of nodes, it is preferable to consider solutions without a central control point, such as clustering approaches. When working on self-organization, it is also crucial to consider the energy consumption of nodes and to devise solutions that maximize the IoT system's lifespan and the communication efficiency within that system.

 

Self-optimization

Optimal usage of the constrained resources (such as memory, bandwidth, processor, and most importantly, power) of IoT devices is necessary for sustainable and long-living IoT deployments. Given some high-level optimization goals in terms of performance, energy consumption or quality of service, the system itself should perform necessary actions to attain its objectives.
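One simple way to picture self-optimization is a node adjusting its own sampling interval against a battery-lifetime goal. The rule below is a hypothetical sketch (the function name, thresholds and units are illustrative, not from any standard):

```python
def next_sampling_interval(interval_s, battery_pct, elapsed_days, target_days):
    """Hypothetical self-optimization rule: slow sampling down when the
    battery is draining faster than the lifetime goal allows, and speed
    it up when there is slack."""
    # Ideal linear drain: what the battery level *should* be at this point.
    expected_pct = 100.0 * (1 - elapsed_days / target_days)
    if battery_pct < expected_pct:          # draining too fast -> sample less often
        return min(interval_s * 2, 3600)
    if battery_pct > expected_pct + 10:     # ample slack -> sample more often
        return max(interval_s // 2, 1)
    return interval_s                        # on track -> keep current rate

# 30 days into a 1-year deployment, only 50% battery left (goal needs ~92%):
print(next_sampling_interval(60, 50, 30, 365))   # 120 -- back off to every 2 min
```

The point is that the operator only states the high-level goal (the target lifetime); the device derives the concrete parameter values itself.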

Self-configuration

IoT systems are potentially made up of thousands of nodes and devices such as sensors and actuators. Configuration of the system is therefore very complex and difficult to handle by hand. The IoT system should provide remote configuration facilities so that self-management applications automatically configure the necessary parameters based on the needs of applications and users. This includes configuring, for instance, device and network parameters; installing, uninstalling or upgrading software; and tuning performance parameters.

 

Self-protection

Due to its wireless and ubiquitous nature, IoT will be vulnerable to numerous malicious attacks. As IoT is closely related to the physical world, the attacks will for instance aim at controlling the physical environments or obtaining private data. The IoT should autonomously tune itself to different levels of security and privacy, while not affecting the quality of service and quality of experience.

 

Self-healing

The objective of this property is to detect and diagnose problems as they occur and to immediately attempt to fix them autonomously. IoT systems should continuously monitor the state of their different nodes and detect whenever nodes behave differently than expected. The system can then perform actions to fix the problems encountered, for example re-configuring parameters or installing a software update.
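A common building block for this kind of monitoring is a heartbeat watchdog; the sketch below is hypothetical (class and node names are illustrative) and only covers the detection half, with the repair action left as a comment:

```python
import time

class Watchdog:
    """Hypothetical self-healing sketch: nodes report heartbeats, and any
    node silent for longer than the timeout is flagged for repair."""
    def __init__(self, timeout_s=30):
        self.timeout_s = timeout_s
        self.last_seen = {}

    def heartbeat(self, node_id, now=None):
        self.last_seen[node_id] = now if now is not None else time.time()

    def failed_nodes(self, now=None):
        now = now if now is not None else time.time()
        return [n for n, t in self.last_seen.items() if now - t > self.timeout_s]

wd = Watchdog(timeout_s=30)
wd.heartbeat("node-a", now=0)
wd.heartbeat("node-b", now=25)
# node-a is 40 s stale -> candidate for re-configuration or a software update
print(wd.failed_nodes(now=40))
```

The explicit `now` parameter is only there to make the behaviour deterministic in this sketch; a deployed watchdog would use the wall clock.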

 

Self-description

Things and resources (sensors and actuators) should be able to describe their characteristics and capabilities in an expressive manner in order to allow other communicating objects to interact with them. Adequate device and service description formats and languages should be defined, possibly at the semantic level. The existing languages should be re-adapted in order to find a trade-off between the expressiveness, the conformity and the size of the descriptions.

Self-description is a fundamental property for implementing plug and play resources and devices.

 

Self-discovery

Together with the self-description, the self-discovery feature plays an essential role for successful IoT deployments. IoT devices/services should be dynamically discovered and used by the others in a seamless and transparent way. Only powerful and expressive device and service discovery protocols (together with description protocols) would allow an IoT system to be fully dynamic (topology-wise).
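Self-description and self-discovery can be pictured together with a small in-process registry (all names hypothetical; real deployments would use protocols such as mDNS/DNS-SD or CoAP resource discovery rather than a single shared object):

```python
class Registry:
    """Hypothetical sketch: devices announce a self-description document,
    and clients discover devices by the capability they need."""
    def __init__(self):
        self.devices = []

    def announce(self, description):
        """A device registers its own description (the self-description step)."""
        self.devices.append(description)

    def discover(self, capability):
        """A client finds devices by capability (the self-discovery step)."""
        return [d["id"] for d in self.devices if capability in d["capabilities"]]

reg = Registry()
reg.announce({"id": "t1", "capabilities": ["temperature"], "unit": "celsius"})
reg.announce({"id": "b1", "capabilities": ["blinds.open", "blinds.close"]})
print(reg.discover("temperature"))   # ['t1']
```

The expressiveness trade-off mentioned above shows up directly in the description document: richer capability vocabularies allow more precise discovery, at the cost of larger descriptions on constrained devices.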

 

Self-matchmaking

To fully unlock the IoT potential, virtual objects will have to:

• Be reusable outside the context for which they were originally deployed and

• Be reliable in the service they provide.

On the one hand, IoT services will be able to exploit enriched availability of underlying objects. They will also have to cope with their unreliable nature and be able to find suitable “equivalent object” alternatives in case of failure, unreachability etc. Such envisaged dynamic service-enhancement environments will require self-matchmaking features (between services and objects and vice versa) that will prevent users of IoT future services from having to (re-)configure objects themselves.

 

Self-energy-supplying

And finally, self-energy-supplying is a tremendously important (and very IoT specific) feature to realize and deploy sustainable IoT solutions. Energy harvesting techniques (solar, thermal, vibration, etc.) should be preferred as a main power supply, rather than batteries that need to be replaced regularly, and that have a negative effect on the environment.

 

Situation Awareness and Cognition

Integration of sensory, computing and communication devices (e.g. smart phones, GPS) into the Internet is becoming common. This is increasing the ability to extract “content” from the data generated and understand it from the viewpoint of the wider application domain (i.e. meta-data). This ability to extract content becomes ever more crucial and complex, especially when we consider the amount of data that is generated. Complexity can be reduced through the integration of self-management and automatic learning features (i.e. exploiting cognitive principles). The application of cognitive principles in the extraction of “content” from data can also serve as a foundation towards creating overall awareness of a current situation. This then gives a system the ability to respond to changes within its situational environment, with little or no direct instruction from users and therefore facilitate customized, dependable and reliable service creation.

 

Infrastructure

The Internet of Things will become part of the fabric of everyday life. It will become part of our overall infrastructure just like water, electricity, telephone, TV and most recently the Internet. Whereas the current Internet typically connects full-scale computers, the Internet of Things (as part of the Future Internet) will connect everyday objects with a strong integration into the physical world.

Plug and Play Integration If we look at the IoT-related technology available today, there is huge heterogeneity. It is typically deployed for very specific purposes, and its configuration requires significant technical knowledge and may be cumbersome. To achieve a true Internet of Things we need to move away from such small-scale, vertical application silos towards a horizontal infrastructure on which a variety of applications can run simultaneously. This is only possible if connecting a thing to the Internet of Things becomes as simple as plugging it in and switching it on. Such plug and play functionality requires an infrastructure that supports it, starting from the networking level and going beyond it to the application level. This is closely related to the aspects discussed in the section on autonomy. On the networking level, plug and play functionality has to enable communication; features like those provided by IPv6 point in the right direction to support this process. Suitable infrastructure components then have to be discovered to enable integration into the Internet of Things. This includes announcing the functionalities provided, such as what can be sensed or what can be actuated.

 

Infrastructure Functionality The infrastructure needs to support applications in finding the things they require. An application may run anywhere, including on the things themselves. Finding things is not limited to the start-up time of an application. Automatic adaptation is needed whenever relevant new things become available, things become unavailable, or the status of things changes. The infrastructure has to support the monitoring of such changes and the adaptation that is required as a result.

Semantic Modeling of Things To reach the full potential of the Internet of Things, semantic information regarding the things, the information they can provide, and the actuations they can perform needs to be available. It is not sufficient to know that there is a temperature sensor or an electric motor; it is important to know which temperature the sensor measures (the indoor temperature of a room or the temperature of the fridge), and whether the electric motor opens and closes the blinds or moves something to a different location. As it may not be possible to provide such semantic information by simply switching on the thing, the infrastructure should make adding it easy for users. Also, it may be possible to derive semantic information given some basic information and additional knowledge, e.g. deriving information about a room based on the knowledge that a certain sensor is located in that room. This should be enabled by the infrastructure.

 

Physical Location and Position As the Internet of Things is strongly rooted in the physical world, the notion of physical location and position are very important, especially for finding things, but also for deriving knowledge. Therefore, the infrastructure has to support finding things according to location (e.g. geo-location based discovery). Taking mobility into account, localization technologies will play an important role for the Internet of Things and may become embedded into the infrastructure of the Internet of Things.

Security and Privacy In addition, the infrastructure needs to provide support for security and privacy functions, including identification, confidentiality, integrity, non-repudiation, authentication and authorization. Here the heterogeneity and the need for interoperability among the different ICT systems deployed in the infrastructure, as well as the resource limitations of IoT devices (e.g., nano-sensors), have to be taken into account.

 

Infrastructure-related Research Questions Based on the description above of what an infrastructure for the Internet of Things should look like, we see the following challenges and research questions:

• How can the plug and play functionality be achieved taking into account the heterogeneity of the underlying technology?

• How should the resolution and discovery infrastructure look to enable finding things efficiently?

• How can monitoring and automatic adaptation be supported by the infrastructure?

• How can semantic information be easily added and utilized within the infrastructure?

• How can new semantic information be derived from existing semantic information based on additional knowledge about the world, and how can this be supported by the infrastructure?

• How can the notion of physical location be best reflected in the infrastructure to support the required functionalities mentioned above?

• How should the infrastructure support for security and privacy look?

• How can the infrastructure support accounting and charging as the basis for different IoT business models?

• How can we provide security and privacy functions at the infrastructure level, given the heterogeneous and resource-limited components of the infrastructure?

 

Networks and Communication

 

Present communication technologies span the globe in wireless and wired networks and support global communication by globally-accepted communication standards. The Internet of Things Strategic Research and Innovation Agenda (SRIA) intends to lay the foundations for the Internet of Things to be developed by research through to the end of this decade and for subsequent innovations to be realized even after this research period. Within this time frame the number of connected devices, their features, their distribution and implied communication requirements will develop; as will the communication infrastructure and the networks being used. Everything will change significantly. Internet of Things devices will be contributing to and strongly driving this development. Changes will first be embedded in given communication standards and networks and subsequently in the communication and network structures defined by these standards.

Networking Technology The evolution and pervasiveness of present communication technologies has the potential to grow to unprecedented levels in the near future by including the world of things into the developing Internet of Things.

Network users will be humans, machines, things and groups of them.

Complexity of the Networks of the Future A key research topic will be to understand the complexity of these future networks and the expected growth of complexity due to the growth of the Internet of Things. The research results on this topic will give guidelines and timelines for defining the requirements for network functions, network management, network growth, and network composition and variability. Wireless networks, in particular, cannot grow without side effects such as interference.

Growth of Wireless Networks

Wireless networks especially will grow largely by adding vast amounts of small Internet of Things devices with minimum hardware, software and intelligence, limiting their resilience to any imperfections in all their functions. Based on the research of the growing network complexity, caused by the Internet of Things, predictions of traffic and load models will have to guide further research on unfolding the predicted complexity to real networks, their standards and on-going implementations.

Mankind is the largest user group for the mobile phone system, which is the most prominent distributed system worldwide besides the fixed telephone system and the Internet. Obviously, the number of body area networks, networks integrated into clothes, and further personal area networks (all based on Internet of Things devices) will be of the order of the current human population, yet most of these have still to become reality. In a second stage, cross-network cooperative applications are likely to develop that are not yet envisioned.

Mobile Networks Applications such as body area networks may develop into an autonomous world of small, mobile networks being attached to their bearers and being connected to the Internet by using a common point of contact. The mobile phone of the future could provide this function. Analyzing worldwide industrial processes will be required to find limiting set sizes for the number of machines and all things being implied or used within their range in order to develop an understanding of the evolution steps to the Internet of Things in industrial environments.

 

Expanding Current Networks to Future Networks Generalizing the examples given above, the trend may be to expand current end user network nodes into networks of their own or even a hierarchy of networks. In this way networks will grow on their current access side by unfolding these outermost nodes into even smaller, attached networks, spanning the Internet of Things in the future. In this context networks or even networks of networks will be mobile by themselves.

Overlay Networks Even if network construction principles should best be unified for the worldwide Internet of Things and the networks bearing it, there will not be one unified network, but several. In some locations even multiple networks overlaying one another physically and logically. The Internet and the Internet of Things will have access to large parts of these networks. Further sections may be only represented by a top access node or may not be visible at all globally. Some networks will by intention be shielded against external access and secured against any intrusion on multiple levels.

Network Self-organization Wireless networks being built for the Internet of Things will show a large degree of ad-hoc growth, structure and organization, as well as significant change over time, including mobility. These constituent features will have to be reflected in setting the networks up and during their operation. Self-organization principles will be applied to configuration by context sensing (especially concerning autonomous negotiation of interference management and possibly cognitive spectrum usage), to optimization of network structure and of traffic and load distribution in the network, and to self-healing of networks. All of this will be done in heterogeneous environments, without interaction by users or operators.

 

IPv6, IoT and Scalability The current transition of the global Internet to IPv6 will provide a virtually unlimited number of public IP addresses, enabling bidirectional and symmetric (true M2M) access to billions of smart things. It will pave the way for new models of IoT interconnection and integration, and it raises numerous questions: How can the Internet infrastructure cope with a highly heterogeneous IoT and ease global IoT interconnection? How will interoperability with legacy systems be achieved? What will be the impact of the transition to IPv6 on IoT integration, large-scale deployment and interoperability? It will probably require developing an IPv6-based European research infrastructure for the IoT.
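The scale of the IPv6 address space can be checked directly with Python's standard `ipaddress` module; the prefix below is the reserved documentation prefix (2001:db8::/32), used here purely for illustration:

```python
import ipaddress

# A single standard IPv6 subnet (/64) already holds 2**64 addresses --
# more than enough to give every device on Earth its own public address.
net = ipaddress.ip_network("2001:db8:1234:5678::/64")
print(net.num_addresses)            # 18446744073709551616

# Every thing can therefore be addressed directly (true M2M access):
thing = ipaddress.ip_address("2001:db8:1234:5678::42")
print(thing in net)                 # True
```

This abundance is what removes the NAT traversal obstacles that make bidirectional M2M communication awkward in IPv4 deployments.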

Green Networking Technology Network technology has traditionally developed along the line of predictable progress of implementation technologies in all their facets. Given the enormous expected growth of network usage and the number of user nodes in the future, driven by the Internet of Things, there is a real need to minimize the resources for implementing all network elements and the energy being used for their operation.

Disruptive developments are to be expected from analyzing the energy requirements of current solutions and from going back to first principles of communication in wired, optical and wireless information transfer. Research done by Bell Labs in recent years shows that networks can achieve an energy-efficiency increase by a factor of 1,000 compared to current technologies. The results of the research done by the GreenTouch consortium should be integrated into the development of the network technologies of the future. These network technologies have to be appropriate to realize the Internet of Things and the Future Internet in their most expanded state, as anticipated by the experts.

 

Communication Technology

Unfolding the Potential of Communication Technologies The research aimed at communication technology to be undertaken in the coming decade will have to develop and unfold all potential communication profiles of Internet of Things devices: from bit-level communication to continuous data streams, from sporadic connections to always-on connections, from standard services to emergency modes, from open communication to fully secured communication, spanning applications from local to global, based on single devices up to globally distributed sets of devices. In this context, the growth of the mobile device market, shown in the figure below, is pushing the deployment of Internet of Things applications in which these mobile devices (smart phones, tablets, etc.) are seen as gateways for wireless sensors and actuators.

Fig. Growth of the mobile device market.

Based on this research, the anticipated bottlenecks in communications, networks and services will have to be quantified using appropriate theoretical methods and simulation approaches. Communication technologies for the Future Internet and the Internet of Things will have to avoid such bottlenecks by construction, not only for a given state of development but for the whole path to fully developed and still-growing networks.

Correctness of Construction Correctness of construction of the whole system is a systematic process that starts from the small systems running on the devices up to network and distributed applications. Methods to prove the correctness of structures and of transformations of structures will be required, including protocols of communication between all levels of communication stacks used in the Internet of Things and the Future Internet.

These methods will be essential for the Internet of Things devices and systems, as the smallest devices will be implemented in hardware and many types will not be programmable. Interoperability within the Internet of Things will be a challenge even if such proof methods are used systematically.

A Unified Theoretical Framework for Communication The communication between processes running within an operating system on a single- or multi-core processor, between processes running in a distributed computer system, and between devices and structures in the Internet of Things and the Future Internet using wired and wireless channels shall be merged into a unified minimal theoretical framework, covering and including formalized communication within protocols.

In this way minimum overhead, optimum use of communication channels and best handling of communication errors should be achievable. Secure communication could be embedded efficiently and naturally as a basic service.

Energy-Limited Internet of Things Devices and their Communication

Many types of Internet of Things devices will be connected to the energy grid all the time; on the other hand a significant subset of Internet of Things devices will have to rely on their own limited energy resources or energy harvesting throughout their lifetime. Given this spread of possible implementations and the expected importance of minimum-energy Internet of Things devices and applications, an important topic of research will have to be the search for minimum energy, minimum computation, slim and lightweight solutions through all layers of Internet of Things communication and applications.

Challenge the Trend to Complexity The inherent trend to higher complexity of solutions on all levels will be seriously questioned — at least with regard to minimum energy Internet of Things devices and services. Their communication with the access edges of the Internet of Things network shall be optimized cross domain with their implementation space and it shall be compatible with the correctness of the construction approach.

Disruptive Approaches Given these special restrictions, non-standard, but already existing ideas should be carefully checked again and be integrated into existing solutions, and disruptive approaches shall be searched and researched with high priority. This very special domain of the Internet of Things may well develop into its most challenging and most rewarding domain—from a research point of view and, hopefully, from an economical point of view as well.

 

Processes

The deployment of IoT technologies will significantly impact and change the way enterprises do business, as well as interactions between different parts of society, affecting many processes. To reap the many potential benefits that have been postulated for the IoT, several challenges regarding the modeling and execution of such processes need to be solved in order to see wider, and in particular commercial, deployments of IoT. The special characteristics of IoT services and processes have to be taken into account, and it is likely that existing business process modeling and execution languages, as well as service description languages such as USDL, will need to be extended.

Adaptive and Event-driven Processes One of the main benefits of IoT integration is that processes become more adaptive to what is actually happening in the real world. Inherently, this is based on events that are either detected directly or derived by real-time analysis of sensor data. Such events can occur at any time in the process. For some events the occurrence probability is very low: one knows that they might occur, but not when, or whether they will occur at all. Modeling such events into a process is cumbersome, as they would have to be included in all possible activities, adding complexity and making it more difficult to understand the modeled process, in particular its main flow (the 80% case). Second, how to react to a single event can depend on the context, i.e. the set of events that have been detected previously. Research on adaptive and event-driven processes could consider the extension and exploitation of Event-Driven Architectures (EDA) for activity monitoring and complex event processing (CEP) in IoT systems. EDA could be combined with business process execution languages in order to trigger specific steps or parts of a business process.

Dealing with Unreliable Data When dealing with events coming from the physical world (e.g., via sensors or signal-processing algorithms), a degree of unreliability and uncertainty is introduced into the processes. If decisions in a business process are to be taken based on events that have some uncertainty attached, it makes sense to associate each of these events with a value for the quality of information (QoI). In simple cases, this allows the process modeler to define thresholds: e.g., if the degree of certainty is above 90%, the event is assumed to have really happened; if it is between 50% and 90%, other activities are triggered to determine whether the event occurred; if it is below 50%, the event is ignored. Things get more complex when multiple events are involved: e.g., one event with 95% certainty, one with 73%, and another with 52%. The underlying services that fire the original events have to be programmed to attach such QoI values to the events. From a BPM perspective, it is essential that such information can be captured, processed and expressed in the modeling notation language, e.g. BPMN. Second, the syntax and semantics of such QoI values need to be standardized: is it a simple certainty percentage as in the examples above, or should it be something more expressive (e.g., a range within which the true value lies)? Relevant techniques should address uncertainty not only in the flow of a given (well-known) IoT-based business process, but also in the overall structuring and modeling of (possibly unknown or unstructured) process flows. Techniques for fuzzy modeling of data and processes could be considered.
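The threshold logic described above can be sketched in a few lines. The following is only an illustration of one possible QoI semantics; the function names and the naive combination rule (multiplying independent certainties) are hypothetical, not part of any BPM standard.

```python
# Illustrative sketch: routing a detected event based on an attached
# quality-of-information (QoI) certainty value. Names are hypothetical.

def route_event(event_name: str, certainty: float) -> str:
    """Decide how a process should treat an event given its QoI certainty."""
    if certainty > 0.90:
        return "accept"   # treat the event as having really happened
    elif certainty >= 0.50:
        return "verify"   # trigger extra activities to confirm it
    else:
        return "ignore"   # too uncertain to act on

def combined_certainty(certainties: list[float]) -> float:
    """Naive combination for independent events: probability that
    all of them actually occurred (one of many possible QoI semantics)."""
    p = 1.0
    for c in certainties:
        p *= c
    return p

print(route_event("door_opened", 0.95))                  # accept
print(route_event("door_opened", 0.73))                  # verify
print(round(combined_certainty([0.95, 0.73, 0.52]), 3))  # 0.361
```

The multi-event case shows why standardization matters: under this multiplicative rule, three individually plausible events combine to a certainty of only about 36%, so different QoI semantics can lead to very different process decisions.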

 

Dealing with Unreliable Resources

Not only is the data from resources inherently unreliable; the resources providing the data are themselves unreliable, e.g., due to failure of the hosting device. Processes relying on such resources need to be able to adapt to such situations. The first issue is to detect a failure. When a process calls a resource directly, detection is trivial. It is more difficult for resources that only generate an event at some point in time (e.g., a resource that monitors the temperature inside a truck and sends an alert if it becomes too hot): not having received any event can mean resource failure, but can also mean there was nothing to report. Likewise, the quality of the generated reports should be regularly audited for correctness. Some monitoring software is needed to detect such problems; it is unclear, though, whether such software should be part of the BPM execution environment or a separate component. Among the research challenges is the synchronization of monitoring processes with run-time actuating processes, given that management planes (e.g., monitoring software) tend to operate at different time scales from IoT processes (e.g., automation and control systems in manufacturing).

 

Highly Distributed Processes When interaction with real-world objects and devices is required, it can make sense to execute a process in a decentralized fashion. As has been stated in the literature, the decomposition and decentralization of existing business processes increases scalability and performance, allows better decision making, and could even lead to new business models and revenue streams through entitlement management of software products deployed on smart items. For example, in environmental monitoring or supply-chain tracking applications, no messages need to be sent to the central system as long as everything is within the defined limits; only if there is a deviation does an alert (event) need to be generated, which in turn can lead to an adaptation of the overall process. From a business process modeling perspective, though, it should be possible to define the process centrally, including the fact that some activities (i.e., the monitoring) will be done remotely. Once the complete process is modeled, it should then be possible to deploy the related services to where they have to be executed, and then run and monitor the complete process.

Relevant research issues include tools and techniques for the synthesis, the verification and the adaptation of distributed processes, in the scope of a volatile environment (i.e. changing contexts, mobility, internet connected objects/devices that join or leave).
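The report-by-exception behavior described above (stay silent while readings are within limits, emit an alert only on deviation) can be sketched as a generator running at the edge. Names and the event shape are illustrative:

```python
# Illustrative sketch of "report by exception": a remotely deployed
# monitoring activity emits alert events only for out-of-range readings;
# in-range readings generate no traffic to the central system.

def edge_monitor(readings, low, high):
    """Yield alert events only for readings outside [low, high]."""
    for t, value in readings:
        if value < low or value > high:
            yield {"time": t, "value": value, "type": "limit_violation"}

readings = [(0, 4.1), (1, 4.3), (2, 9.7), (3, 4.0)]   # e.g. truck temperature
alerts = list(edge_monitor(readings, low=2.0, high=8.0))
print(alerts)  # [{'time': 2, 'value': 9.7, 'type': 'limit_violation'}]
```

Of four readings, only the single out-of-limit one produces a message, which is what makes decentralized execution attractive for bandwidth- and energy-constrained deployments.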

 

Data Management

Data management is a crucial aspect in the Internet of Things. When considering a world of objects interconnected and constantly exchanging all types of information, the volume of the generated data and the processes involved in the handling of those data become critical.

A long-term opportunity for wireless communications chip makers is the rise of Machine-to-Machine (M2M) computing, which is one of the enabling technologies for the Internet of Things. This technology spans a broad range of applications. While there is consensus that M2M is a promising pocket of growth, analyst estimates of the size of the opportunity diverge by a factor of four. Conservative estimates assume roughly 80 to 90 million M2M units will be sold in 2014, whereas more optimistic projections forecast sales of 300 million units. Based on historical analyses of adoption curves for similar disruptive technologies, such as portable MP3 players and antilock braking systems for cars, it is believed that unit sales in M2M could rise by as much as a factor of ten over the next five years, as shown in the figure below.

There are many technologies and factors involved in the “data management” within the IoT context. Some of the most relevant concepts which enable us to understand the challenges and opportunities of data management are:

• Data Collection and Analysis

• Big Data

• Semantic Sensor Networking

• Virtual Sensors

• Complex Event Processing.

 

 

                                                Fig. Growth in M2M communication

 

Data Collection and Analysis (DCA)

Data Collection and Analysis modules or capabilities are essential components of any IoT platform or system, and they are constantly evolving in order to support more features and provide more capacity to external components (either higher-layer applications leveraging the data stored by the DCA module, or other external systems exchanging information for analysis or processing). The DCA module is part of the core layer of any IoT platform. Some of the main functions of a DCA module are:

• User/customer data storing: provides storage of the customer's information collected by sensors.

• User data & operation modeling: allows the customer to create new sensor data models to accommodate collected information and the modeling of the supported operations.

• On-demand data access: provides APIs to access the collected data.

• Device event publish/subscribe/forwarding/notification: provides APIs to access the collected data in real time.

• Customer rules/filtering: allows the customer to establish his own filters and rules to correlate events.

• Customer task automation: provides the customer with the ability to manage his automatic processes (example: scheduled platform-originated data collection).

• Customer workflows: allows the customer to create his own workflow to process the incoming events from a device.

• Multitenant structure: provides the structure to support multiple organizations and reseller schemes.

In the coming years, the main research efforts should be targeted at features that should be included in any Data Collection and Analysis platform:

• Multi-protocol: DCA platforms should be capable of handling or understanding different input (and output) protocols and formats. Different standards and wrappings for the submission of observations should be supported.

• De-centralization: sensors, and the measurements/observations they capture, should be storable in systems de-centralized from a single platform. It is essential that different components, geographically distributed in different locations, can cooperate and exchange data. Related to this concept, federation among different systems will make possible the global integration of IoT architectures.

• Security: DCA platforms should increase the level of data protection and security, from the transmission of messages from devices (sensors, actuators, etc.) to the data stored in the platform.

• Data mining features: ideally, DCA systems should also integrate capabilities for processing the stored information, making it easier to extract useful data from the huge amount of content that may be recorded.
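Two of the DCA functions listed earlier, publish/subscribe notification and customer-defined filtering rules, can be sketched together in a minimal broker. All class and field names are illustrative, not taken from any specific platform:

```python
# Minimal sketch of a DCA publish/subscribe path with customer filter rules.

class DCABroker:
    def __init__(self):
        self.subscribers = []   # (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        """Customer rule: only events matching the predicate are forwarded."""
        self.subscribers.append((predicate, callback))

    def publish(self, event: dict):
        """Device event enters the platform and is routed to matching subscribers."""
        for predicate, callback in self.subscribers:
            if predicate(event):
                callback(event)

broker = DCABroker()
received = []
# Customer rule: only forward high-temperature events.
broker.subscribe(lambda e: e.get("temp", 0) > 30, received.append)

broker.publish({"device": "s1", "temp": 22})   # filtered out
broker.publish({"device": "s2", "temp": 35})   # forwarded
print(received)  # [{'device': 's2', 'temp': 35}]
```

A real platform would add persistence, multitenancy and protocol adapters around this core; the sketch only shows the event-correlation idea.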

 

Big Data

Big data is about the processing and analysis of large data repositories, so disproportionately large that it is impossible to treat them with the conventional tools of analytical databases. Some statements suggest that we are entering the “Industrial Revolution of Data”, where the majority of data will be stamped out by machines. These machines generate data a lot faster than people can, and their production rates will grow exponentially with Moore’s Law. Storing this data is cheap, and it can be mined for valuable information. Examples of this tendency include:

• Web logs;

• RFID;

• Sensor networks;

• Social networks;

• Social data (due to the Social data revolution);

• Internet text and documents;

• Internet search indexing;

• Call detail records;

• Astronomy, atmospheric science, genomics, biogeochemical, biological, and other complex and/or interdisciplinary scientific research;

• Military surveillance;

• Medical records;

• Photography archives;

• Video archives;

• Large scale e-commerce.

The trend is part of an environment that has become quite popular lately: the proliferation of web pages, image and video applications, social networks, mobile devices, apps, sensors, and so on, able to generate, according to IBM, more than 2.5 quintillion bytes per day, to the extent that 90% of the world's data has been created over the past two years. Big data requires exceptional technologies to efficiently process large quantities of data within a tolerable amount of time. Technologies being applied to big data include massively parallel processing (MPP) databases, data-mining grids, distributed file systems, distributed databases, cloud computing platforms, the Internet, and scalable storage systems. These technologies are linked with many aspects, from the analysis of natural phenomena such as climate and seismic data to environments such as health, safety or, of course, business. The biggest challenge of the Petabyte Age will not be storing all that data; it will be figuring out how to make sense of it. Big data deals with unconventional, unstructured databases, which can reach petabytes, exabytes or zettabytes, and which require specific treatment, either in terms of storage or of processing/display. Companies focused on big data, such as Google, Yahoo!, Facebook or some specialized start-ups, currently do not use Oracle tools to process their big data repositories; they opt instead for an approach based on distributed, cloud and open-source systems. An extremely popular example is Hadoop, an open-source framework in this field that allows applications to work with huge repositories of data and thousands of nodes. Hadoop was inspired by Google tools such as MapReduce and the Google File System, as well as by NoSQL systems, which in many cases do not comply with the ACID (atomicity, consistency, isolation, durability) properties of conventional databases. In the future, a huge increase in adoption is expected, along with many questions that must be addressed. Among the imminent research targets in this field are:

• Privacy: big data systems must ensure that users, and citizens in general, do not perceive that their privacy is being invaded.

• Integration of both relational and NoSQL systems.

• More efficient indexing, search and processing algorithms, allowing the extraction of results in reduced time and, ideally, in near-real-time scenarios.

• Optimized storage of data: given the amount of information that the new IoT world may generate, it is essential to keep storage requirements and costs from growing exponentially.
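The MapReduce model that inspired Hadoop can be illustrated in miniature: a map phase emits (key, value) pairs, a shuffle groups them by key, and a reduce phase aggregates each group. The single-process sketch below only shows the programming model; a real Hadoop job distributes these phases across thousands of nodes:

```python
# Toy, single-process illustration of the MapReduce programming model.

from collections import defaultdict

def map_phase(records):
    """Map: emit (key, value) pairs, here one (word, 1) per word."""
    for line in records:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group all values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's values, here by summing counts."""
    return {key: sum(values) for key, values in groups.items()}

logs = ["sensor reading ok", "sensor reading high", "sensor offline"]
counts = reduce_phase(shuffle(map_phase(logs)))
print(counts["sensor"])   # 3
print(counts["reading"])  # 2
```

The appeal for big data is that map and reduce are independently parallelizable, so the same program scales from one machine to a cluster.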

 

Semantic Sensor Networks and Semantic Annotation of Data The information collected from the physical world, in combination with the existing resources and services on the Web, facilitates enhanced methods to obtain business intelligence, enabling the construction of new types of front-end applications and services which could revolutionize the way organizations and people use Internet services and applications in their daily activities. Annotating and interpreting the data, and also the network resources, enables management of the large-scale distributed networks that are often resource- and energy-constrained, and provides means that allow software agents and intelligent mechanisms to process and reason over the acquired data.

There are currently on-going efforts to define ontologies and to create frameworks to apply semantic Web technologies to sensor networks. The Semantic Sensor Web (SSW) proposes annotating sensor data with spatial, temporal, and thematic semantic metadata. This approach uses the current OGC and SWE specifications and attempts to extend them with semantic web technologies to provide enhanced descriptions and facilitate access to sensor data. The W3C Semantic Sensor Networks Incubator Group is also working on developing an ontology for describing sensors. Effective description of sensor, observation and measurement data, and utilizing semantic Web technologies for this purpose, are fundamental steps in the construction of semantic sensor networks.

However, associating this data to the existing concepts on the Web and reasoning the data is also an important task to make this information widely available for different applications, front-end services and data consumers.

Semantics allow machines to interpret links and relations between different attributes of a sensor description and also other resources. Utilizing and reasoning over this information enables the integration of the data as networked knowledge. On a large scale, this machine-interpretable information (i.e., semantics) is a key enabler and necessity for semantic sensor networks. The emergence of sensor data as linked data enables sensor network providers and data consumers to connect sensor descriptions to potentially endless data existing on the Web. By relating sensor data attributes such as location, type, observation and measurement features to other resources on the Web of data, users will be able to integrate physical-world data and logical-world data to draw conclusions, create business intelligence, enable smart environments, and support automated decision-making systems, among many other applications.

The linked sensor data can also be queried, accessed and reasoned over based on the same principles that apply to linked data. The use of linked data to describe sensor network resources and data has been described in the literature, including an implementation of an open platform to publish and consume interoperable sensor data.

In general, associating sensor and sensor network data with other concepts (on the Web) and reasoning over them makes the information widely available for different applications, front-end services and data consumers. Semantic descriptions allow machines to interpret links and relations between the different attributes of a sensor description and other data existing on the Web or provided by other applications and resources. Utilizing and reasoning over this information enables the integration of the data on a wider scale, known as networked knowledge. This machine-interpretable information (i.e., semantics) is a key enabler for semantic sensor networks.
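The spatial, temporal and thematic annotation described above can be sketched as a function that wraps a raw reading in metadata. The field names below are illustrative placeholders, not the actual SSN or OGC vocabulary terms:

```python
# Sketch: annotating a raw sensor reading with spatial, temporal and
# thematic metadata, in the spirit of the Semantic Sensor Web.
# Field names are hypothetical, not real SSN/SWE terms.

import json
from datetime import datetime, timezone

def annotate(reading: float, unit: str, lat: float, lon: float,
             phenomenon: str) -> dict:
    return {
        "value": reading,
        "unit": unit,                           # thematic metadata
        "phenomenon": phenomenon,               # thematic metadata
        "location": {"lat": lat, "lon": lon},   # spatial metadata
        "observedAt": datetime.now(timezone.utc).isoformat(),  # temporal
    }

obs = annotate(21.5, "Cel", lat=52.52, lon=13.40,
               phenomenon="air_temperature")
print(json.dumps(obs, indent=2))
```

Once readings carry such metadata, they can be linked to other Web resources (place names, phenomenon ontologies) and queried like any other linked data.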

Virtual Sensors

A virtual sensor can be considered a product of the spatial, temporal and/or thematic transformation of raw sensor data or of other virtual sensors, with the necessary provenance information attached to this transformation. Virtual sensors and actuators are a programming abstraction that simplifies the development of decentralized WSN applications. The data acquired by a set of sensors can be collected, processed according to an application-provided aggregation function, and then perceived as the reading of a single virtual sensor. Dually, a virtual actuator provides a single entry point for distributing commands to a set of real actuator nodes. The flow of information between real devices and virtual sensors or actuators is presented in Figure 2.30. We follow that statement with this definition:

• A virtual sensor behaves just like a real sensor, emitting time series data from a specified geographic region with newly defined thematic concepts or observations which the real sensors may not have.

• A virtual sensor may not have any real sensor’s physical properties such as manufacturer or battery power information, but does have other properties, such as: who created it; what methods are used, and what original sensors it is based on.

The virtualization of sensors can be considered at different levels, as presented in the figure below. At the lowest level are those related to the more local processing of several simple measurements (for example, in a sensing node); at the highest level is the abstract combination of different sensors at the application level (including user-generated virtual sensors).

 

Fig.  Flow of information between real devices and virtual sensors or actuators

 

Fig. Different levels for sensor virtualization.

 

In that sense the development of virtual sensors could be approached following two different degrees of complexity:

• The combination of a limited number of related sensors or measurements to derive new virtual data (usually done at the sensor node or gateway level).

• The complex process of deriving virtual information from a huge space of sensed data (generally at the application level).
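The first (simpler) case, combining a few related sensors through an application-provided aggregation function, can be sketched as follows. The class and names are illustrative, and the "real" sensors are stubbed as callables:

```python
# Sketch: a virtual sensor that aggregates a set of real sensors through
# an application-provided function. Names are hypothetical.

class VirtualSensor:
    def __init__(self, sources, aggregate):
        self.sources = sources        # callables returning a raw reading
        self.aggregate = aggregate    # application-provided function

    def read(self):
        """Behaves like a single real sensor: one call, one value."""
        return self.aggregate([source() for source in self.sources])

# Three real temperature sensors in the same room (stubbed here).
room_sensors = [lambda: 21.0, lambda: 22.0, lambda: 23.0]
avg_temp = VirtualSensor(room_sensors, lambda vs: sum(vs) / len(vs))
print(avg_temp.read())  # 22.0

# A virtual sensor can itself feed a new virtual sensor.
too_hot = VirtualSensor([avg_temp.read], lambda vs: vs[0] > 25.0)
print(too_hot.read())  # False
```

The second instance shows the composability argued for below: because a virtual sensor exposes the same read interface as a real one, virtual sensors can be chained without the consumer noticing.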

Furthermore, it is also important to consider that, due to the temporal dimension of sensor data, most of the processing required to develop virtual sensors is tightly related to the event concept as defined in ISO 19136 ("an action that occurs at an instant or over an interval of time"), as well as to event processing, defined as "creating, deleting, reading and editing of, as well as reacting to, events and their representations".

An event, as a message indicating that something of interest happens, is usually specified through an event type as a structure of attribute-value tuples. An important attribute is the event occurrence time or its valid time interval. Timing is generally described using timestamps but its proper management presents important challenges in geographically dispersed distributed systems.
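The event structure described above, a typed set of attribute-value tuples plus an occurrence timestamp, can be sketched as a small immutable record. Field names are illustrative:

```python
# Sketch: an event as a typed structure of attribute-value tuples with
# an occurrence timestamp. Field names are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    event_type: str        # e.g. "TemperatureExceeded"
    occurred_at: float     # occurrence time (epoch seconds)
    attributes: tuple = () # attribute-value tuples

e = Event("TemperatureExceeded", 1700000000.0,
          attributes=(("truck", "T-42"), ("celsius", 41.5)))
print(e.event_type, dict(e.attributes)["truck"])  # TemperatureExceeded T-42
```

In a geographically dispersed system the `occurred_at` timestamp is the hard part, as noted above: clocks on different nodes drift, so comparing occurrence times across devices needs explicit synchronization or interval-based semantics.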

The complexity of deriving virtual information from a large number of sensor data streams, as depicted in the figure below, demands proper methods, techniques and tools for processing events while they occur, i.e., in a continuous and timely fashion. Deriving valuable higher-level knowledge from lower-level events has been approached using different technologies from many independent research fields (such as discrete event simulation, active databases, network management, or temporal reasoning) and in different application fields (such as business activity monitoring, market data analysis, sensor networks, etc.). Only in recent years has the term Complex Event Processing (CEP) emerged as a discipline of its own and as an important trend in industry applications where it is necessary to detect situations (specified as complex events) that result from a number of correlated (simple) events. The CEP concept is described in depth hereafter.

Fig. Complex event processing (CEP) and event stream processing (ESP).

More specifically, as represented in Figure 2.32, considering that sensor data is generally delivered as a stream, a sub-form of CEP known as Event Stream Processing (ESP) [119] can be used to search for different patterns in continuous streams of sensor data events. In the near future, some of the main challenges to be solved in the context of virtual sensors are:

• Seamless integration and interoperability of "real" and "virtual" sensors. Virtual sensors should be indistinguishable from real ones for external or higher-level applications, and also for other sensors or system modules if necessary. This way, virtual sensors can be fed as inputs to new virtual sensors, making the flexibility and power of this approach almost unlimited.

• Support for heterogeneity of (input) sensors and measurements. A virtual sensor should, ideally, be capable of handling input sensors of a very different nature. This results in a very powerful mechanism for implementing complex logic, also linking with CEP concepts. The integration of sensors capturing different phenomena may help in implementing heuristics or artificial-intelligence-based decision modules capable of handling aspects that are not homogeneous (not mere statistical functions over homogeneous figures). This also includes the automatic handling or conversion of different units or scales for input sensors measuring the same aspect.

• Definition of virtual sensors based on semantic rules. A first approach for defining virtual sensors is to implement the programmatic logic or processes associated with the "operation" to be performed by the sensor. But a much richer and more powerful scheme can be obtained if sensors can be defined by "high-level" semantic rules (describing only the general behavior or expected results), with implementation steps automatically generated from the rules or hidden from external users.

Complex Event Processing

A concept linked with the notion and appearance of "virtual sensors" is Complex Event Processing, in the sense that virtual sensors can be used to implement "single sensors" from complex and multiple (actual) sensors or various data sources, thus providing seamless integration and processing of complex events in a sensor (or Data Collection and Analysis) platform or system. Complex event processing (CEP) is an emerging network technology that creates actionable, situational knowledge from distributed message-based systems, databases and applications in real time or near real time. CEP can provide an organization with the capability to define, manage and predict events, situations, exceptional conditions, opportunities and threats in complex, heterogeneous networks. Many have said that advancements in CEP will help advance the state of the art in end-to-end visibility for operational situational awareness in many business scenarios (The CEP Blog) [120]. These scenarios range from network management to business optimization, resulting in enhanced situational knowledge, increased business agility, and the ability to more accurately (and rapidly) sense, detect and respond to business events and situations.

In short, CEP is a technology for extracting higher-level knowledge from situational information abstracted from sensory information, and for low-latency filtering, correlating, aggregating, and computing on real-world event data.

Types

Most CEP solutions and concepts can be classified into two main categories:

• Computation-oriented CEP: focused on executing online algorithms as a response to event data entering the system. A simple example is continuously calculating an average based on data from the inbound events.

• Detection-oriented CEP: focused on detecting combinations of events, called event patterns or situations. A simple example of detecting a situation is looking for a specific sequence of events.

Some of the research topics for the immediate future in the context of CEP are:

• Distributed CEP: since CEP core engines usually require powerful hardware and must consider complex input data, it is not easy to design and implement distributed systems capable of taking consistent decisions from non-centralized resources.

• Definition of standardized interfaces: currently, most CEP solutions are entirely proprietary and not compliant with any standard format or interface. In addition, it is not easy to integrate these processes into other systems in an automated way. It is essential to standardize input and output interfaces in order to make CEP systems interoperable among themselves (thus enabling the exchange of input events and results) and to ease the integration of CEP into other systems, just like any other step in the transformation or processing of data.

• Improved security and privacy policies: CEP systems often imply the handling of "private" data that are incorporated into decision taking or the elaboration of more complex data. It is necessary that all processes and synthetic data be bounded by well-defined rules and security constraints (which must be measurable, traceable and verifiable).
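The two CEP categories introduced above, computation-oriented and detection-oriented, can each be sketched as a tiny streaming operator. Class names and the event model are illustrative, not from any CEP product:

```python
# Sketch of the two CEP categories: a computation-oriented operator
# (running average over inbound events) and a detection-oriented operator
# (matching a specific sequence of event types).

from collections import deque

class RunningAverage:
    """Computation-oriented CEP: a continuously updated average."""
    def __init__(self):
        self.count, self.total = 0, 0.0

    def on_event(self, value: float) -> float:
        self.count += 1
        self.total += value
        return self.total / self.count

class SequenceDetector:
    """Detection-oriented CEP: fire when a given event-type sequence occurs."""
    def __init__(self, pattern):
        self.pattern = list(pattern)
        self.window = deque(maxlen=len(self.pattern))  # sliding window

    def on_event(self, event_type: str) -> bool:
        self.window.append(event_type)
        return list(self.window) == self.pattern

avg = RunningAverage()
print([avg.on_event(v) for v in (2.0, 4.0, 6.0)])  # [2.0, 3.0, 4.0]

det = SequenceDetector(["door_open", "motion", "door_close"])
hits = [det.on_event(t) for t in ("door_open", "motion", "door_close")]
print(hits)  # [False, False, True]
```

Both operators process events one at a time as they arrive, which is the continuous, low-latency style of processing that distinguishes CEP and ESP from batch analysis.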

 

Security, Privacy & Trust

The Internet of Things presents security-related challenges that are identified in the IERC 2010 Strategic Research and Innovation Roadmap but some elaboration is useful as there are further aspects that need to be addressed by the research community. While there are a number of specific security, privacy and trust challenges in the IoT, they all share a number of transverse non-functional requirements:

• Lightweight and symmetric solutions, with support for resource-constrained devices

• Scalability to billions of devices/transactions

• Solutions that address federation and administrative co-operation

• Heterogeneity and multiplicity of devices and platforms

• Intuitively usable solutions, seamlessly integrated into the real world

Trust for IoT

As IoT-scale applications and services will scale over multiple administrative domains and involve multiple ownership regimes, there is a need for a trust framework to enable the users of the system to have confidence that the information and services being exchanged can indeed be relied upon. The trust framework needs to be able to deal with humans and machines as users, i.e. it needs to convey trust to humans and needs to be robust enough to be used by machines without denial of service. The development of trust frameworks that address this requirement will require advances in areas such as:

• Lightweight Public Key Infrastructures (PKI) as a basis for trust management. Advances are expected in hierarchical and cross certification concepts to enable solutions to address the scalability requirements.

• Lightweight key management systems to enable trust relationships to be established and the distribution of encryption materials using minimum communications and processing resources, as is consistent with the resource constrained nature of many IoT devices.

• Quality of Information is a requirement for many IoT-based systems where metadata can be used to provide an assessment of the reliability of IoT data.

• Decentralized and self-configuring systems as alternatives to PKI for establishing trust e.g. identity federation, peer to peer.

• Novel methods for assessing trust in people, devices and data, beyond reputation systems. One example is Trust Negotiation. Trust Negotiation is a mechanism that allows two parties to automatically negotiate, on the basis of a chain of trust policies, the minimum level of trust required to grant access to a service or to a piece of information.

• Assurance methods for trusted platforms including hardware, software, protocols, etc.

• Access Control to prevent data breaches. One example is Usage Control, which is the process of ensuring the correct usage of certain information according to a predefined policy after the access to information is granted.
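Usage Control as described above, checking every use of already-granted data against a predefined policy, can be sketched as a simple policy-evaluation function. The policy shape and names are purely illustrative:

```python
# Sketch of Usage Control: after access is granted, each proposed use of
# the data is still checked against a predefined policy.
# Policy structure and field names are hypothetical.

def usage_allowed(policy: dict, purpose: str, uses_so_far: int) -> bool:
    """Check a proposed use of already-accessed data against its policy."""
    return (purpose in policy["allowed_purposes"]
            and uses_so_far < policy["max_uses"])

policy = {"allowed_purposes": {"billing", "audit"}, "max_uses": 2}
print(usage_allowed(policy, "billing", uses_so_far=0))    # True
print(usage_allowed(policy, "marketing", uses_so_far=0))  # False
print(usage_allowed(policy, "billing", uses_so_far=2))    # False
```

The point of the model is the third case: access was legitimately granted, but the policy still bounds how often (and for what purpose) the information may be used afterwards.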

Security for IoT

As the IoT becomes a key element of the Future Internet and a critical national/international infrastructure, the need to provide adequate security for the IoT infrastructure becomes ever more important. Large-scale applications and services based on the IoT are increasingly vulnerable to disruption from attack or information theft. Advances are required in several areas to make the IoT secure from those with malicious intent, including.

• DoS/DDoS attacks are already well understood for the current Internet, but the IoT is also susceptible to such attacks and will require specific techniques and mechanisms to ensure that transport, energy and city infrastructures cannot be disabled or subverted.

• General attack detection and recovery/resilience to cope with IoT-specific threats, such as compromised nodes, malicious code and hacking attacks.

• Cyber situation awareness tools/techniques will need to be developed to enable IoT-based infrastructures to be monitored. Advances are required to enable operators to adapt the protection of the IoT during the lifecycle of the system and assist operators to take the most appropriate protective action during attacks.

• The IoT requires a variety of access control and associated accounting schemes to support the various authorization and usage models that are required by users. The heterogeneity and diversity of the devices/gateways that require access control will require new lightweight schemes to be developed.

• The IoT needs to handle virtually all modes of operation by itself without relying on human control. New techniques and approaches e.g. from machine learning, are required to lead to a self-managed IoT.

Privacy for IoT

As much of the information in an IoT system may be personal data, there is a requirement to support anonymity and restrictive handling of personal information. There are a number of areas where advances are required:

• Cryptographic techniques that enable protected data to be stored, processed and shared without the information content being accessible to other parties. Technologies such as homomorphic and searchable encryption are potential candidates for developing such approaches.

• Techniques to support Privacy by Design concepts, including data minimization, identification, authentication and anonymity.

• Fine-grained and self-configuring access control mechanisms emulating the real world.

There are a number of privacy implications arising from the ubiquity and pervasiveness of IoT devices where further research is required, including:

• Preserving location privacy, where location can be inferred from things associated with people.

• Prevention of the inference of personal information, which individuals would wish to keep private, through the observation of IoT-related exchanges.

• Keeping information as local as possible using decentralized computing and key management.

• Use of soft identities, where the real identity of the user can be used to generate various soft identities for specific applications. Each soft identity can be designed for a specific context or application without revealing unnecessary information, which can lead to privacy breaches.

 

Device Level Energy Issues

 

One of the essential challenges in IoT is how to interconnect "things" in an interoperable way while taking the energy constraints into account, knowing that communication is the most energy-consuming task on devices. RF solutions for a wide field of applications in the Internet of Things have been released over the last decade, driven by a need for integration and low power consumption.

Low Power Communication

Several low power communication technologies have been proposed from different standardization bodies. The most common ones are:

IEEE 802.15.4 defines a low-cost, low-power-consumption, low-complexity, low-to-medium-range communication standard at the link and physical layers for resource-constrained devices.

Bluetooth low energy (Bluetooth LE) is the ultra-low power version of the Bluetooth technology that is up to 15 times more efficient than Bluetooth.

Ultra-Wide Bandwidth (UWB) technology is an emerging technology in the IoT domain that transmits signals across a much larger frequency range than conventional systems. In addition to its communication capabilities, UWB can allow for high-precision ranging of devices in IoT applications.

RFID/NFC proposes a variety of standards to offer contactless solutions. Proximity cards can be read only from less than 10 cm; they follow the ISO 14443 standard, which is also the basis of the NFC standard. RFID tags or vicinity tags dedicated to the identification of objects have a reading distance which can reach 7 to 8 meters. Nevertheless, front-end architectures have remained traditional and there is now a demand for innovation. Regarding the ultra-low consumption target, super-regenerative receivers have proven to be very energy-efficient architectures for wake-up receivers. A wake-up receiver remains active permanently at very low power consumption and can trigger a signal to wake up a complete/standard receiver. In this field standardization is required, as today only proprietary solutions exist, for an actual gain in the overall market to be significant. On the other hand, power consumption reduction of a full RF receiver can be envisioned, with a target well below 5 mW to enable a very small form factor and long battery lifetime. Indeed, targeting below 1 mW would enable support from energy harvesting systems, enabling energy-autonomous RF communications. In addition to this improvement, lighter communication protocols should also be envisioned, as the frequent synchronization requirement makes frequent activation of the RF link mandatory, thereby adding overhead to the power consumption.

It must also be considered that recent advances in CMOS technology beyond the 90 nm, and even 65 nm, nodes lead to new paradigms in the field of RF communication. Applications which require RF connectivity are growing as fast as the Internet of Things, and it is now economically viable to propose this connectivity as a feature of a wider solution. It is already the case for the microcontroller, which can now easily embed a ZigBee or Bluetooth RF link, and this will expand to other large-volume sensor applications. Progressively, portable RF architectures are making it easy to add the RF feature to existing devices. This will lead to RF designs heavily exploiting digital blocks and limiting analogue ones, such as silicon-consuming passives and inductors, as these are rarely easy to port from one technology to another. Nevertheless, the same performance will be required, so receiver architectures will have to efficiently digitize the signal early in the receiver or transmitter chain. In this direction, band-pass sampling solutions are promising, as the signal is quantized at a much lower frequency than the Nyquist one thanks to a deep under-sampling ratio. Consumption is therefore greatly reduced compared to more traditional early-stage sampling processes, where the sampling frequency is much higher.

Continuous-time quantization has also been regarded as a solution for high integration and easy portability. It is an early-stage quantization as well, but without sampling: there is no added consumption due to a clock, as only the signal level is considered. These two solutions are clear evolutions that pave the way to further digital and portable RF solutions.

Cable-powered devices are not expected to be a viable option for IoT devices as they are difficult and costly to deploy. Battery replacements in devices are either impractical or very costly in many IoT deployment scenarios. As a consequence, for large scale and autonomous IoT, alternative energy sourcing using ambient energy should be considered.

Fig.  Ambient sources’ power densities before conversion.

 

Energy Harvesting

Four main ambient energy sources are present in our environment: mechanical energy, thermal energy, radiant energy and chemical energy. These sources are characterized by different power densities shown in above figure.

Energy harvesting (EH) must be chosen according to the local environment. For outdoor or luminous indoor environments, solar energy harvesting is the most appropriate solution. In a closed environment, thermal or mechanical energy may be a better alternative. It is mainly the power density of the primary energy source in the considered environment that defines the electrical output power that can be harvested, not the transducer itself. The figure above also shows that, excluding "sun-outside", 10–100 μW is a fair order of magnitude for the output power of a 1 cm² or 1 cm³ EH device. Low power devices are expected to require 50 mW in transmission mode and less in standby or sleep modes. EH devices cannot supply this amount of energy in a continuous active mode, but an intermittent operation mode can be used instead in EH-powered devices. The sensor node's average power consumption corresponds to the total amount of energy needed for one measurement cycle multiplied by the frequency of operation. For example, harvesting 100 μW for 1 year corresponds to a total amount of energy equivalent to that stored in 1 g of lithium.

Considering this approach of looking at the energy consumed per measurement instead of the average power consumption, the result today is that:

• Sending 100 bits of data consumes about 5μJ,

• Measuring acceleration consumes about 50μJ,

• Making a complete measurement (measure + conversion + emission) consumes 250–500 μJ.

Therefore, with 100 μW harvested continuously, it is possible to perform a complete measurement every 1–10 seconds. This duty cycle can be sufficient for many applications. For other applications, the power consumption of basic functions is expected to be reduced by a factor of 10 to 100 within 10 years, which will enable a continuous running mode for EH-powered IoT devices.
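The duty-cycle arithmetic above can be sketched in a few lines; the figures are the illustrative values quoted in this section, not measurements:

```python
# Energy budget for an EH-powered sensor node, using the figures above.
harvested_power_w = 100e-6     # 100 uW harvested continuously

send_100_bits_j = 5e-6         # ~5 uJ to send 100 bits of data
measure_accel_j = 50e-6        # ~50 uJ to measure acceleration
full_cycle_j = 500e-6          # measure + conversion + emission (upper bound)

# Minimum interval between complete measurements the harvester can sustain:
interval_s = full_cycle_j / harvested_power_w
print(f"one full measurement every {interval_s:.0f} s")  # -> every 5 s
```

With the lower 250 μJ figure the same calculation gives one measurement every 2.5 seconds, matching the 1–10 second range above.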

Even though many developments have been performed over the last 10 years, energy harvesting—except PV cells—is still an emerging technology that has not yet been adopted by industry. Nevertheless, further improvements of present technologies should enable the needs of IoT to be met.

One example of an interoperable wireless standard enables switches, gateways and sensors from different manufacturers to combine seamlessly and communicate wirelessly with all major wired bus systems such as KNX, LON, BACnet or TCP/IP.

The energy harvesting wireless sensor solution is able to generate a signal from an extremely small amount of energy. From just 50 μWs (50 μJ), a standard energy harvesting wireless module can easily transmit a signal 300 meters (in a free field).

Future Trends and Recommendations

In the future, the number and types of IoT devices will increase, so interoperability between devices will be essential. More computation with less power at lower cost will have to be delivered. Technology integration will be an enabler, along with the development of even lower-power technology and improvements in battery efficiency. An analysis of the power consumption of computers over the last 60 years concluded that the electrical efficiency of computation has doubled roughly every year and a half. A similar trend can be expected for embedded computing using similar technology over the next 10 years. This would lead to a reduction by a factor of roughly 100 in power consumption at the same level of computation. Allowing for a 10-fold increase in IoT computation, power consumption should still be reduced by a factor of 10. An example of power consumption requirements for different devices is given in the figure below.
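The order-of-100 figure follows directly from the doubling period:

```python
# Efficiency doubling every 1.5 years compounds to roughly 100x in a decade.
years = 10
doubling_period_years = 1.5
gain = 2 ** (years / doubling_period_years)
print(f"efficiency gain over {years} years: ~{gain:.0f}x")  # -> ~102x
```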

Fig.  Power consumption requirements for different devices.

 

On the other hand, energy harvesting techniques have been explored to respond to the energy consumption requirements of the IoT domain. Vibration energy harvesters are expected to reach higher power densities in the future (from 10 μW/g to 30 μW/g) and to work over a wider frequency bandwidth. A roadmap of vibration energy harvesters is provided in the figure below. The goal of vibration energy harvesting research is to develop plug-and-play (PnP) devices, able to work in any vibrating environment, within 10 years. At the same time, the energy consumption of basic functions is expected to decrease by at least a factor of 10. All this progress will allow vibration energy harvesters to address new markets, from industry to healthcare and defence. The main challenge for thermoelectric solutions is to increase the intrinsic efficiency of thermoelectric materials, in order to convert a larger part of the few mW of thermal energy available. This efficiency improvement will mainly be achieved using micro- and nanotechnologies (such as superlattices or quantum dots).

Fig.  Energy harvesting wireless sensor network.

 

For solar energy harvesting, photovoltaic cells are probably the most advanced and robust solution. They are already used in many applications, and for most of them today's solutions are sufficient. Yet for IoT devices it would be useful to improve photovoltaic cell efficiency, both to reduce cell size and to harvest energy in even darker places.
UNIT-II:

INTERNET PRINCIPLES AND COMMUNICATION TECHNOLOGY

Internet Communication:

AN OVERVIEW:

Suppose that you wanted to send a message to the authors of this book, but you didn't have the postal address, and you didn't have any way to look up our phone number (because in this example you don't have the Internet). You remember that we're from the UK, and London is the biggest city in the UK. So you send a postcard to your cousin Bob, who lives there. Your cousin sees that the postcard is for some crazy hardware and technology people, so he puts the postcard in an envelope and drops it off at the London Hackspace, because the guys there probably know what to do with it. At the Hackspace, Jonty picks up the envelope and sees that it's for some people in Liverpool. Like all good Londoners, Jonty never goes anywhere north of Watford, but he remembers that Manchester is in the north too. So he calls up the Manchester Digital Laboratory (MadLab), opens the envelope to read the contents, and says, "Hey, I've got this message for Adrian and Hakim in Liverpool. Can you pass it on?" The guys at MadLab ask whether anyone knows who we are, and it turns out that Hwa Young does. So the next time she comes to Liverpool, she delivers the postcard to us.
IP
The preceding scenario describes how the Internet Protocol (IP) works. Data is sent from one machine to another in a packet, with a destination address and a source address in a standardized format (a “protocol”). Just like the original sender of the message in the example, the sending machine doesn’t always know the best route to the destination in advance. Most of the time, the packets of data have to go through a number
of intermediary machines, called routers, to reach their destination. The underlying networks aren’t always the same: just as we used the phone, the postal service, and delivery by hand, so data packets can be sent over wired or wireless networks, through the phone system, or over satellite links. In our example, a postcard was placed in an envelope before getting passed onwards. This happens with Internet packets, too. So, an IP packet is a block of data along with the same kind of information you would write on a physical envelope: the name and address of the server, and so on. But if an IP packet ever gets transmitted across your local wired network via an Ethernet cable—the cable that connects your home broadband router or your office local area network (LAN) to a desktop PC—then the whole packet will get bundled up into another type of envelope, an Ethernet Frame, which adds additional information about how to complete the last few steps of its journey to your computer.
Of course, it’s possible that your cousin Bob didn’t know about the London Hackspace, and  then maybe the message would have got stuck with him. You would have had no way to know whether it got there. This is how IP works. There is no guarantee, and you can send only what will fit in a single packet.

TCP
What if you wanted to send longer messages than fit on a postcard? Or wanted to make sure your messages got through? What if everyone agreed that postcards written in green ink meant that we cared about whether they arrived, and that we would always number them so that we could send longer messages? The person at the other end would be able to put the messages in order, even if they got delivered in the wrong order (maybe you were writing your letter over a number of days, and the day you passed the fifth one on to cousin Bob, he happened to visit Liverpool and passed on that postcard without relaying it through the London Hackspace or MadLab). We would send back postcard notifications that just told you which postcards we had received, so you could resend any that went missing. That is basically how the Transmission Control Protocol (TCP) works. The most widely used transport protocol on the Internet, TCP is built on top of the basic IP protocol and adds sequence numbers, acknowledgements, and retransmissions. This means that a message sent with TCP can be arbitrarily long and gives the sender some assurance that it actually arrived at the destination intact. Because the combination of TCP and IP is so useful, many services are built on it in turn, such as email and the HTTP protocol that transmits information across the World Wide Web.
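TCP's guarantees can be sketched with Python's standard socket library on the local machine: the sender hands over a message far larger than a single packet, and TCP's segmentation, acknowledgements and sequence numbers deliver it intact and in order. The helper names here are illustrative, not from the book.

```python
import socket
import threading

# Server side: accept one connection and collect everything that arrives.
def server(listener, results):
    conn, _ = listener.accept()
    chunks = []
    while True:
        data = conn.recv(4096)
        if not data:          # empty read means the sender closed the stream
            break
        chunks.append(data)
    results.append(b"".join(chunks))
    conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

received = []
t = threading.Thread(target=server, args=(listener, received))
t.start()

message = b"postcard " * 10_000      # far larger than one IP packet
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(message)              # TCP handles segmentation and ACKs
client.close()
t.join()

print(len(received[0]), "bytes delivered intact and in order")
```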

THE IP PROTOCOL SUITE (TCP/IP)

The combination of TCP and IP is so ubiquitous that we often refer simply to “TCP/IP” to describe a whole suite or stack of protocols layered on top of each other, each layer building on the capabilities of the one below.

> The low-level protocols at the link layer manage the transfer of bits of information across a network link. This could be by an Ethernet cable, by WiFi, or across a telephone network, or even by short-range radio standards such as IEEE 802.15.4 designed to carry data over the Personal Area Network (PAN), that is to say between devices carried by an individual.


> The Internet layer then sits on top of these various links and abstracts away the gory details in favour of a simple destination address.

> Then TCP, which lives in the transport layer, sits on top of IP and extends it with more sophisticated control of the messages passed.

> Finally, the application layer contains the protocols that deal with fetching web pages, sending emails, and Internet telephony. Of these, HTTP is the most ubiquitous for the web, and indeed for communication between Internet of Things devices.

UDP
As you can see, TCP is not the only protocol in the transport layer. Unlike TCP, but as with IP itself, with UDP each message may or may not arrive. No handshake or retransmission occurs, nor is there any delay to wait for messages in sequence. These limitations make TCP preferable for many of the tasks that Internet of Things devices will be used for. The lack of overhead, however, makes UDP useful for applications such as streaming data, which can cope with minor errors but doesn't like delays. Voice over IP (VoIP), computer-based telephony such as Skype, is an example of this: missing one packet might cause a tiny glitch in the sound quality, but waiting for several packets to arrive in the right order could
make the speech too jittery to be easy to understand. UDP is also the transport for some very important protocols which provide common, low-level functionality, such as DNS and DHCP, which relate to the discovery and resolution of devices on the network. We look at this topic in detail in the next section.
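The difference is easy to see with Python's socket module: a UDP datagram is sent with no connection, handshake or acknowledgement. On the local machine it will arrive; across the open Internet it might not.

```python
import socket

# Receiver: bind a UDP socket to an OS-chosen port on localhost.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# Sender: fire-and-forget datagram; no connect(), no retransmission.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"temperature:21.5", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
print(data)   # b'temperature:21.5' -- on localhost; the Internet gives no such promise
```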

Fig.2.1 Internet Protocol suite

IP ADDRESSES


We mentioned earlier that the Internet Protocol knows the addresses of the destination and source devices. But what does an "address" consist of? Here is a typical human (or in this case, hobbit) address: Bilbo Baggins, "Bag End", Bagshot Row, Hobbiton, The Shire, Middle Earth. In the world of low-level computer networking, however, numbers are much easier to deal with. So, IP addresses are numbers. In Internet Protocol version 4 (IPv4), almost 4.3 billion IP addresses are possible: 4,294,967,296 to be precise, or 2^32. Though that is convenient for computers, it's tough for humans to read, so IP addresses are usually written as four 8-bit numbers separated by dots (from 0.0.0.0 to 255.255.255.255), for example 192.168.0.1 (which is often the address of your home router) or 8.8.8.8 (which is the address of one of Google's DNS servers). This "dotted quad" is still exactly equivalent to the 32-bit number. As well as being easier for humans to remember, it also makes it easier to infer information about the address by grouping certain blocks of addresses together. For example: 8.8.8.x is one of several IP ranges assigned to Google; 192.168.x.x is a range assigned for private networks (your home or office network router may well assign IP addresses in this range); 10.x.x.x is another private range. Every machine on the Internet has at least one IP address. That means every computer, every network-connected printer, every smartphone, and every Internet of Things device has one. If you already have a Raspberry Pi, an Arduino board, or any of the other microcontrollers described in Chapters 3 and 4, they will expect to get their own IP address, too. When you consider this fact, those 4 billion addresses suddenly look as if they might not be enough. The private ranges such as 192.168.x.x offer one mitigation to this problem. Your home or office network might have only one publicly visible IP address.
However, you could have all the IP addresses in the range 192.168.0.0 to 192.168.255.255 (2^16 = 65,536 addresses) assigned to distinct devices. A better solution to this problem is the next generation of Internet Protocol, IPv6, which we look at later in this chapter.
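Python's standard ipaddress module makes the equivalence between the dotted quad and the underlying 32-bit number explicit:

```python
import ipaddress

addr = ipaddress.IPv4Address("192.168.0.1")
print(int(addr))                          # 3232235521 -- the raw 32-bit value
print(ipaddress.IPv4Address(3232235521))  # 192.168.0.1 -- and back again

# The private ranges mentioned above are flagged by the library:
print(addr.is_private)                                # True
print(ipaddress.IPv4Address("8.8.8.8").is_private)    # False
```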
DNS
Although computers can easily handle 32-bit numbers, even formatted as dotted quads they are easy for most humans to forget. The Domain Name System (DNS) helps our feeble brains navigate the Internet. Domain names, such as the following, are familiar to us from the web, or perhaps from email or other services: google.com

bbc.co.uk
wiley.com
arduino.cc
Each domain name has a top-level domain (TLD), like .com or .uk, which further subdivides into .co.uk and .gov.uk, and so on. The top-level domain knows where to find more information about the domains within it; for example, .com knows where to find google.com and wiley.com. The domains then have information about where to direct calls to individual machines or services. For example, the DNS records for google.com know where to point you for www.google.com, mail.google.com, and calendar.google.com.
The preceding examples are all instantly recognizable as website names, which is to say you could enter them into your web browser as, for example, http://www.google.com.
But DNS can also point to other services on the Internet, for example:

pop3.google.com, for receiving email from Gmail
smtp.google.com, for sending email to Gmail
ns1.google.com, the address of one of Google's many DNS servers

Configuring DNS is a matter of changing just a few settings. Your registrar (the company that sells you your domain name) often has a control panel to change these settings. You might also run your own authoritative DNS server. The settings might contain an entry like this one for roomofthings.com:

book A 80.68.93.60 3h

This entry means that the address book.roomofthings.com (which hosts the blog for this book) is served by that IP address, and will be for the next three hours.
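The same lookup your browser performs can be made directly from Python's socket module. Resolving "localhost" works offline; a real domain such as google.com needs network access and will return whatever your configured DNS server answers.

```python
import socket

# Resolve a name to an IPv4 address via the system's DNS resolver.
print(socket.gethostbyname("localhost"))     # 127.0.0.1 (resolved locally)
# print(socket.gethostbyname("google.com"))  # needs network; answer varies
```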


STATIC IP ADDRESS ASSIGNMENT

How do you get assigned an IP address? If you have bought a server-hosting package from an Internet service provider (ISP), you might typically be given a single IP address. But the company itself has been given a block of addresses to assign. Historically, these were ranges of different sizes, typically separated into "classes" of 8 bits, 16 bits, or 24 bits:

Class A, from 0.x.x.x
Class B, from 128.0.x.x
Class C, from 192.0.0.x

The class C ranges had a mere 8 bits (256 addresses) assigned to them, while the class A ranges had many more addresses and would therefore be given only to the very largest of Internet organizations. The rigid separation of address ranges into classes was not very efficient; every entity would want to keep enough spare addresses for future expansion, which meant that many addresses remained unused. With the explosion of the number of devices connecting to the Internet (a theme throughout this chapter), the scheme was superseded in 1993 by Classless Inter-Domain Routing (CIDR), which allows you to specify exactly how many bits of the address are fixed. (See RFCs 1518 and 1519, at http://tools.ietf.org/rfc/.) So, the class A addresses we mentioned above would be equivalent to 0.0.0.0/8, while a class C might be 208.215.179.0/24.
For example, you saw previously that Google has the range 8.8.8.x (which is equivalent to 8.8.8.0/24 in CIDR notation). Google has chosen to give one of its public DNS servers the address 8.8.8.8 from this range, largely because this address is easy to remember. In many cases, however, the system administrator simply assigns server numbers in order. The administrator makes a note of the addresses and updates DNS records and so on to point to these addresses. We call this kind of address static because once assigned it won't change again without human intervention. Now consider your home network: every time you plug a desktop PC into your router, connect your laptop or phone to the wireless, or switch on your network-enabled printer, the device has to get an IP address (often in the range 192.168.0.0/16). You could assign addresses sequentially yourself, but the typical person at home isn't a system administrator and may not keep thorough records. If your brother, who used to use the address 192.168.0.5 but hasn't been home for ages, comes back to find that your new laser printer now has that address, he won't be able to connect to the Internet.
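The CIDR arithmetic above can be checked with the ipaddress module:

```python
import ipaddress

# /24 fixes the first 24 bits, leaving 8 bits (256 addresses) variable:
google = ipaddress.ip_network("8.8.8.0/24")
print(google.num_addresses)                       # 256, the old class C size
print(ipaddress.ip_address("8.8.8.8") in google)  # True

# A /16 private range offers 2**16 addresses for a home or office network:
home = ipaddress.ip_network("192.168.0.0/16")
print(home.num_addresses)                         # 65536
```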

DYNAMIC IP ADDRESS ASSIGNMENT

Thankfully, we don’t typically have to choose an IP address for every device we connect to a network. Instead, when you connect a laptop, a printer, or even a Twitter-following bubble machine, it can request an IP address from the network itself using the Dynamic Host Configuration Protocol (DHCP). When the device tries to connect, instead of checking its internal configuration for its address, it sends a message to the router asking for an address. The router assigns it an address. This is not a static IP address which belongs to the device indefinitely; rather, it is a temporary “lease” which is selected dynamically according to which addresses are currently available. If the router is rebooted, the lease expires, or the device is switched off, some other device may end up with that IP address. This means that you can’t simply point a DNS entry to a device using DHCP. In general, you can rely on the IP address probably being the same for a given work session, but you shouldn’t hard-code the IP address anywhere that you might try to use it another time, when it might have changed. Even the simplest computing devices such as the Arduino board, which we look at in Chapter-5, can use DHCP. Although the Arduino’s Ethernet library allows you to configure a static IP address, you can also request one via DHCP. Using a static address may be fine for development (if you are the only person connected to it with that address), but for working in groups or preparing a device to be distributed to other people on arbitrary networks, you almost certainly want a dynamic IP address.

IPv6
When IP was standardized, few could have predicted how quickly the 4.3 billion addresses that IPv4 allowed for would be allocated. The expected growth of the Internet of Things can only speed up this trend. If your mobile phone, watch, MP3 player, augmented reality sunglasses, and telehealth or sports monitoring devices are all connected to the Internet, then you personally are carrying half a dozen IP addresses already. Perhaps you have a dedicated wallet server for micropayments? A personal web server that contains your contact details and blog? One or more webcams recording your day? Perhaps, rather than a single health monitoring device, you have several distributed across your person, with sensors for temperature, heart rate, insulin levels, and any number of other stimuli. At home you would start with all your electronic devices being connected. But beyond that, you might also have sensors at every door and window for security, more sensitive sound sensors to detect the presence of mice or beetles, and other sensors to check temperature, moisture, and airflow levels for efficiency. It is hard to predict what order of magnitude of Internet-connected devices a household might have in the near future. Tens? Hundreds? Thousands? Enter IPv6, which uses 128-bit addresses, usually displayed to users as eight groups of four hexadecimal digits, for example 2001:0db8:85a3:0042:0000:8a2e:0370:7334. The address space (2^128) is so huge that you could assign the same number of addresses as the whole of IPv4 to every person on the planet and barely make a dent in it. The new standard was discussed during the 1980s and finally released in 1996. In 2013, it is still less popular than IPv4. You can find many ways to work around the lack of public IP addresses using subnets, but there is a chicken-and-egg problem with getting people to use IPv6 without ISP support and vice versa.
It was originally expected that mobile phones connected to the Internet (another huge growth area) would push this technology over the tipping point. In fact, mobile networks are increasingly using IPv6 internally to route traffic. Although this infrastructure is still invisible to the end user, it does mean that there is already a lot of use below the surface which is stacked up, waiting for a tipping point.
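The example address and the sheer size of the IPv6 space can be inspected with the ipaddress module:

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:85a3:0042:0000:8a2e:0370:7334")
print(addr.exploded)     # the full eight groups of four hex digits
print(addr.compressed)   # the shortened form, with leading zeros dropped

# The whole of IPv4 is a rounding error in this space:
print(2 ** 128 // 2 ** 32)   # how many complete IPv4 address spaces fit inside
```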

IPv6 and Powering Devices

We can see that an explosion in the number of Internet of Things devices will almost certainly need IPv6 in the future. But we also have to consider the power consumption of all these devices. We know that we can regularly charge and maintain a small handful of devices. At any one moment, we might have a laptop, a tablet, a phone, a camera, and a music player plugged in to charge. The constant juggling of power sockets, chargers, and cables is feasible but fiddly. The requirements for large numbers of devices, however, are very different. The devices should be low power and very reliable, while still being capable of connecting to the Internet. Perhaps to accomplish this, these devices will team together in a mesh network. This is the vision of 6LoWPAN, an IETF working group proposing solutions for “IPv6 over Low power Wireless Personal Area Networks”, using technologies such as IEEE 802.15.4. While a detailed discussion of 6LoWPAN and associated technologies is beyond the scope of this book, we do come back to many related issues, such as maximizing battery life in Chapter 8 on embedded programming.

Conclusion on IPv6

Although IPv6 is, or will be, big news, we do not go into further detail in this book. In 2013, you can find more libraries, more hardware, and more people that can support IPv4, and this is what will be most helpful when you are moving from prototype to production on an Internet of Things device. Even though we are getting close to the tipping point, existing IPv4 services will be able to migrate to IPv6 networks with minimal or possibly no rewriting. If you are working on IPv6 network infrastructure or are an early adopter of 6LoWPAN, you will need to look to more specialised resources.

MAC ADDRESSES

As well as an IP address, every network-connected device also has a MAC address, which is like the final address on a physical envelope in our analogy. It is used to differentiate different machines on the same physical network so that they can exchange packets. This relates to the lowest-level "link layer" of the TCP/IP stack. Though MAC addresses are globally unique, they don't typically get used outside of one Ethernet network (for example, beyond your home router). So, when an IP message is routed, it hops from node to node, and when it finally reaches a node which knows where the physical machine is, that node passes the message to the device associated with that MAC address. MAC stands for Media Access Control. A MAC address is a 48-bit number, usually written as six groups of hexadecimal digits separated by colons, for example 01:23:45:67:89:ab. Most devices, such as your laptop, come with the MAC address burned into their Ethernet chips. Some chips, such as the Arduino Ethernet's WizNet, don't have a hard-coded MAC address, though. This is for production reasons: if the chips are mass produced, they are, of course, identical, so they can't physically contain a distinctive address. The address could be stored in the chip's firmware, but this would require every chip to be built with custom code compiled into the firmware. Alternatively, one could provide a simple data chip which stores just the MAC address and have the WizNet chip read that. Most consumer devices use some similar process to ensure that the machine always starts up with the same unique MAC address. The Arduino board, as a low-cost prototyping platform for developers, doesn't bother with that nicety, to save time and cost. Yet it does come with a sticker with a MAC address printed on it. Although this might seem a bit odd, there is a good reason for it: that MAC address is reserved and therefore guaranteed unique if you want to use it.
For development purposes, you can simply choose a MAC address that is known not to exist in your network.
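One way to follow this advice systematically is to generate a "locally administered" MAC address: the IEEE 802 addressing scheme reserves bit 0x02 of the first octet for addresses assigned locally rather than burned in by a manufacturer, so such an address can never collide with a factory-assigned one. A sketch in Python (the helper name random_dev_mac is ours, not a standard API):

```python
import random

def random_dev_mac() -> str:
    """Generate a MAC address suitable for development use.

    Setting the 'locally administered' bit (0x02) and clearing the
    'multicast' bit (0x01) in the first octet marks the address as locally
    assigned, so it cannot clash with a manufacturer's burned-in address.
    """
    octets = [random.randrange(256) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & 0xFE   # locally administered, unicast
    return ":".join(f"{o:02x}" for o in octets)

mac = random_dev_mac()
print(mac)   # e.g. 02:1a:4f:9c:33:7e
```

You would still want to check the generated address against the devices on your own network, but the locally administered bit removes the risk of clashing with any globally assigned hardware address.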

TCP AND UDP PORTS

A messenger with a formal invitation for a wealthy family of the Italian Renaissance would go straight to the front entrance to deliver it. A grocer delivering a crate of the first artichokes of the season would go instead to a service entrance, where the crate could be taken quickly to the kitchen without getting in the way of the masters. The following engraving, by John Gilbert, is taken from Shakespeare's Romeo and Juliet. This reminds us that the Capulets' house has at least one other entrance—on Juliet's balcony. If Romeo wants to see his beloved, that is the only way to go. If he climbs up the wrong balcony, he'll either wait outside (the nurse is fast asleep and can't hear his knocks) or get chased away by the angry father. Similarly, when you send a TCP/IP message over the Internet, you have to send it to the right port. TCP ports, unlike entrances to the Capulet house, are referred to by numbers (from 0 to 65535).

AN EXAMPLE: HTTP PORTS

If your browser requests an HTTP page, it usually sends that request to port 80. The web server is “listening” to that port and therefore replies to it. If you send an HTTP message to a different port, one of several things will happen:

> Nothing is listening to that port, and the machine replies with an “RST” packet (a control sequence resetting the TCP/IP connection) to complain about this.

> Nothing is listening to that port, but the firewall lets the request simply hang instead of  replying. The purpose of this (lack of) response is to discourage attackers from trying to find information about the machine by scanning every port.

> The client has decided that trying to send a message to that port is a bad idea and refuses to do it. Google Chrome does this for a fairly arbitrary list of "restricted ports".
> The message arrives at a port that is expecting something other than an HTTP message. The server reads the client's request, decides that it is garbage, and then terminates the connection (or, worse, performs a nonsensical operation based on the message).

Ports 0–1023 are "well-known ports", and only a system process or an administrator can bind to them. Ports 1024–49151 are "registered", so that common applications can have a usual port number; however, most services are able to bind any port number in this range. The Internet Assigned Numbers Authority (IANA) is responsible for registering the numbers in these ranges. People can and do abuse them, especially in the range 1024–49151, but unless you know what you're doing, you are better off using either the correct assigned port or (for an entirely custom application) a port above 49151.

You see custom port numbers if a machine has more than one web server; for example, in development you might have another server bound to port 8080:

http://www.example.com:8080

Or if you are developing a website locally, you may be able to test it with a built-in test web server which binds to a free port. For example, Jekyll (the lightweight blog engine we are using for this book's website) has a test server that runs on port 4000:

http://localhost:4000

The secure (encrypted) HTTPS usually runs on port 443, so these two URLs are equivalent:

https://www.example.com
https://www.example.com:443
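To make the idea of a port "listening" concrete, here is a small Python sketch (the helper is_port_open is our own name, not a library function) that attempts a TCP connection and reports whether anything answered. The demo binds a throwaway listener on the local machine so that it has one port it knows is open:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP handshake succeeds, i.e. something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:        # connection refused, timed out, unreachable
        return False

# Demo: bind a throwaway listener so we have a port we know is open.
server = socket.socket()
server.bind(("127.0.0.1", 0))      # port 0 asks the OS for any free port
server.listen(1)
port = server.getsockname()[1]

open_result = is_port_open("127.0.0.1", port)
server.close()
closed_result = is_port_open("127.0.0.1", port)
```

Note the two failure modes described above: a closed port on a friendly host replies immediately with a refusal, while a firewalled port simply lets the attempt time out.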
OTHER COMMON PORTS

Although you will rarely need a complete catalogue of all port numbers for services, you will quickly come to memorize the port numbers of the common services that you use daily. For example, you will very likely come across the following ports regularly:

a. 80 HTTP

b. 8080 HTTP (for testing servers)

c. 443 HTTPS

d. 22 SSH (Secure Shell)

e. 23 Telnet

f. 25 SMTP (outbound email)

g. 110 POP3 (inbound email)

h. 143 IMAP (inbound email)

All of these services are, in fact, application layer protocols.


APPLICATION LAYER PROTOCOLS

We have seen examples of protocols at the different layers of the TCP/IP stack: the low-level link layer communication across wired Ethernet, IP communication, and the TCP transport layer. Now we come to the highest layer of the stack, the application layer. This is the layer you are most likely to interact with while prototyping an Internet of Things project. It is useful here to pause and flesh out the definition of the word "protocol". A protocol is a set of rules for communication between computers. It includes rules about how to initiate the conversation and what format the messages should be in. It determines what inputs are understood and what output is transmitted. It also specifies how the messages are sent and authenticated and how to handle (and maybe correct) errors caused by transmission. Bearing this definition in mind, we are ready to look in more detail at some application layer protocols, starting with HTTP.

HTTP
The Internet is much more than just "the web", but inevitably web services carried over HTTP hold a large part of our attention when looking at the Internet of Things. HTTP is, at its core, a simple protocol: the client requests a resource by sending a command to a URL, with some headers. We use the current version of HTTP, 1.1, in these examples. Let's try to get a simple document at http://book.roomofthings.com/hello.txt. If you open the URL in your web browser, you see a page showing "Hello World". But let's look at what the browser is actually sending to the server to do this. The basic structure of the request looks like this:

GET /hello.txt HTTP/1.1
Host: book.roomofthings.com

Notice how the message is written in plain text, in a human-readable way (this might sound obvious, but not all protocols are; the messages could be encoded into bytes in a binary protocol). We specified the GET method because we're simply getting the page. We then tell the server which resource we want (/hello.txt) and what version of the protocol we're using. On the following lines, we write the headers, which give additional information about the request. The Host header is the only required header in HTTP 1.1; it is used to let a web server that serves multiple virtual hosts route the request to the right place. Well-written clients, such as your web browser, pass other headers too. For example, my browser sends the following request:

GET /hello.txt HTTP/1.1
Host: book.roomofthings.com
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Charset: UTF-8,*;q=0.5
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Cache-Control: max-age=0
Connection: keep-alive
If-Modified-Since: Tue, 21 Aug 2012 21:41:47 GMT
If-None-Match: "8a25e-d-4c7cd7e3d1cc0"
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.77 Safari/537.1

The Accept headers tell the server what kind of content the client is willing to receive and are part of "content negotiation". For example, if I had passed Accept-Language: it,en-US,en;q=0.8, the server might agree to give me the Italian version of the site instead, reverting to English only if it doesn't have that page in Italian. The other fields give the server more information about the client (for statistics and for working around known bugs) and manage caching and so on.

Finally, the server sends back its response. We already saw what that looked like in the browser, but now let's look at what the full request/response looks like if we speak the HTTP protocol directly. (Obviously, you rarely have to do this in real life. Even if you are programming an Internet of Things device, you usually have access to code libraries that make the request, and the reading of the response, easier.) Notice how we connect using the telnet command to access port 80 directly. Now that we can see the full request, it looks at first sight as if we're repeating some information: the hostname book.roomofthings.com. But remember that DNS will resolve the name to an IP address. All the server sees is the request; it doesn't know that the command that started the request was telnet book.roomofthings.com 80. If the DNS name foo.example.com also pointed at the same machine, the web server might want to respond differently to http://foo.example.com/hello.txt. The server replies, giving us a 200 status code (which it summarizes as "OK"; that is, the request was successful). It also identifies itself as an Apache server, tells us the type of content is text/plain, and returns information to help the client cache the content to make future access to the resource more efficient. You may be wondering where the Hypertext part of the protocol is.
All we’ve had back so far is text, so shouldn’t we be talking HTML to the server? Of course, HTML documents are text documents too, and they’re just as easy to request. Notice how, for the server, replying with a text file or an HTML document is exactly the same process! The only difference is that the Content-Type is now text/html. It’s up to the client to read that markup and display it appropriately.
We look at more features of HTTP over the course of this book, but everything is based around this simple request/response cycle.
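The request and status line formats above can be captured in a few lines of code. The following Python sketch (the helper names are ours, not a library API; real projects would use an HTTP library, as the text advises) builds the same minimal GET request and parses a server's status line:

```python
def build_get_request(host: str, path: str) -> bytes:
    """Build a minimal HTTP/1.1 GET request.

    Host is the only mandatory header in HTTP/1.1; the blank line marks the
    end of the headers, and each line is terminated with CRLF."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n"
            f"\r\n").encode("ascii")

def parse_status_line(response: bytes) -> tuple:
    """Split the first line of a response into (version, code, reason)."""
    first_line = response.split(b"\r\n", 1)[0].decode("ascii")
    version, code, reason = first_line.split(" ", 2)
    return version, int(code), reason

request = build_get_request("book.roomofthings.com", "/hello.txt")
status = parse_status_line(b"HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\nHello World")
print(status)   # ('HTTP/1.1', 200, 'OK')
```

Because the whole exchange is plain text with CRLF line endings, this is exactly what you would type by hand in the telnet session.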

HTTPS: ENCRYPTED HTTP

We have seen how the request and response are created in a simple text format. If someone eavesdropped on your connection (easy to do with tools such as Wireshark if you have access to the network at either end), that person could easily read the conversation. In fact, it isn't the format of the protocol that is the problem: even if the conversation happened in binary, an attacker could write a tool to translate the format into something readable. Rather, the problem is that the conversation isn't encrypted. The HTTPS protocol is actually just plain old HTTP carried over the Secure Sockets Layer (SSL) protocol. An HTTPS server listens on a different port (usually 443) and on connection sets up a secure, encrypted connection with the client (using some fascinating mathematics and clever tricks such as the Diffie–Hellman key exchange). When that's established, both sides just speak HTTP to each other as before! This means that a network snooper can find out only the IP address and port number of the request (because both of these are public information in the envelope of the underlying TCP message, there's no way around that). After that, all it can see is that packets of data are being sent in a request and packets are returned for the response.
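The layering described here, an encrypted channel negotiated first and then ordinary HTTP spoken over it, can be sketched with Python's standard ssl module (the https_get helper is our own illustrative name; making the actual connection requires network access, so only the client context is created when the sketch runs):

```python
import socket
import ssl

def https_get(host: str, path: str = "/") -> bytes:
    """Fetch a resource over HTTPS: TLS is negotiated first, then plain
    HTTP is spoken over the encrypted channel."""
    context = ssl.create_default_context()          # verifies certificates
    with socket.create_connection((host, 443)) as raw:
        # server_hostname enables SNI and hostname verification
        with context.wrap_socket(raw, server_hostname=host) as tls:
            request = (f"GET {path} HTTP/1.1\r\n"
                       f"Host: {host}\r\nConnection: close\r\n\r\n")
            tls.sendall(request.encode("ascii"))
            chunks = []
            while chunk := tls.recv(4096):
                chunks.append(chunk)
    return b"".join(chunks)

# No network is touched here; we only show that the default client context
# enforces certificate and hostname verification out of the box.
ctx = ssl.create_default_context()
```

Notice that the HTTP request string inside the function is byte-for-byte identical to the plaintext one: only the transport underneath it has changed.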

OTHER APPLICATION LAYER PROTOCOLS

All protocols work in a roughly similar way. Some cases involve more than just a two-way request and response. For example, when sending email using SMTP, you first need to do the "HELO handshake", where the client introduces itself with a cheery hello (SMTP commands are all four letters long, so it actually says "HELO") and receives a response like "250 Hello example.org, pleased to meet you!" In all cases, it is worth spending a little time researching the protocol on Google and Wikipedia to understand in overview how it works. You can usually find a library that abstracts the details of the communication process, and we recommend using one wherever possible. Bad implementations of network protocols will create problems for you and the servers you connect to and may result in bugs or your clients getting banned from useful services. So it is generally better to use a well-written, well-debugged implementation that is used by many other developers. In general, the only valid reasons for you, the programmer, to ever speak any application layer protocol directly (that is, without using a library) are

a. There is no implementation of the protocol for your platform (or the implementation is inefficient, incomplete, or broken).

b. You want to try implementing it from scratch, for fun.

c. You are testing, or learning, and want to make a particular request easily.
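To illustrate the line-based, numeric-coded style of SMTP replies mentioned above, here is a toy responder in Python (purely illustrative; a real SMTP server implements far more of RFC 5321, and real clients should use a library such as smtplib):

```python
def smtp_respond(command: str) -> str:
    """Toy illustration of SMTP's reply format: a three-digit numeric code
    followed by human-readable text. Real servers implement RFC 5321."""
    verb = command.strip().split(" ", 1)[0].upper()
    replies = {
        "HELO": "250 Hello example.org, pleased to meet you!",
        "QUIT": "221 Bye",
    }
    return replies.get(verb, "500 Unrecognised command")

greeting = smtp_respond("HELO example.org")
print(greeting)   # 250 Hello example.org, pleased to meet you!
```

The leading numeric code is what client libraries branch on (2xx for success, 5xx for errors), with the text purely for the benefit of humans debugging the exchange.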

COSTS VERSUS EASE OF PROTOTYPING

Although familiarity with a platform may be attractive in terms of ease of prototyping, it is also worth considering the relationship between the costs (of prototyping and of mass producing) of a platform and the development effort that the platform demands. This trade-off is not hard and fast, but it is beneficial if you can choose a prototyping platform in a performance/capabilities bracket similar to the final production solution. That way, you will be less likely to encounter surprises over the cost, or even the wholesale viability, of your project down the line. For example, the cheapest possible way of creating an electronic device might currently be an AVR microcontroller chip, which you can purchase from a component supplier for about £3. This amount is just for the chip, so you would have to sweat the details of how to connect the pins to other components and how to flash the chip with new code. For many people, this platform would not be viable for an initial prototype. Stepping upwards to the approximately £20 mark, you could look at an Arduino or similar. It would have exactly the same chip, but it would be laid out on a board with labelled headers to help you wire up components more easily, have a USB port where you could plug in a computer, and have a well-supported IDE to make programming it easier. But, of course, you are still programming in C++, for reasons of performance and memory. For more money again, approximately £30, you could look at the BeagleBone, which runs Linux and has enough processing power and RAM to run a high-level programming language: libraries are provided within the concurrent programming toolkit Node.js for JavaScript to manipulate the input/output pins of the board. If you choose not to use an embedded platform, you could think about using a smartphone instead.
Smartphones might cost about £300, and although they are a very different beast, they have many of the same features that make the cheaper platforms attractive: connection to the Internet (usually by wireless or 3G phone connection rather than Ethernet), input capabilities (touchscreen, button presses, camera, rather than electronic components), and output capabilities (sound, screen display, vibration). You can often program them in a choice of languages of high or low level, from Objective-C and Java to Python or HTML and JavaScript. Finally, a common or garden PC might be an option for a prototype. These PCs cost from £100 to £1,000 and again have a host of Internet connection and I/O possibilities. You can program them in whatever language you already know how to use. Most importantly, you probably already have one lying around. For the first prototype, the cost is probably not the most important issue: the smartphone or computer options are particularly convenient if you already have one available, at which point they are effectively zero-cost. Although prototyping a "thing" using a piece of general computing equipment might seem like a sideways step, depending on your circumstances, it may be exactly the right thing to do to show whether the concept works and to get people interested in the project, to collaborate on it, or to fund it. At this stage, you can readily argue that doing the easiest thing that could possibly work is entirely sensible. The most powerful platform that you can afford might make sense for now. Of course, if your device has physical interactions (blowing bubbles, turning a clock's hands, taking input from a dial), you will find that a PC is not optimized for this kind of work. It doesn't expose GPIO pins (although people have previously kludged this using the parallel port). An electronics prototyping board, unsurprisingly, is better suited to this kind of work. We come back to combining both of these options shortly.
An important factor to be aware of is that the hardware and programming choices you make will depend on your skill set, which leads us to the obvious criticism of the idea of "ease of prototyping", namely "ease... for whom?" For many beginners to hardware development, the Arduino toolkit is a surprisingly good choice. Yes, the input/output choices are basic and require an ability to follow wiring diagrams and, ideally, a basic knowledge of electronics. Yet the interaction from a programming point of view is essentially simple: writing and reading values to and from the GPIO pins. Yes, the language is C++, which in the early twenty-first century is few people's idea of the best language for beginners. Yet the Arduino toolkit abstracts the calls you make into a setup() function and a loop() function. Even more importantly, the IDE pushes the compiled code onto the device, where it just runs, automatically, until you unplug it. The board's limited capabilities are in one sense an advantage: interaction with it is correspondingly streamlined.
Compare this with developing using a computer: if you already know how to develop an application in C#, in Python, or in JavaScript, you have a great place to start. But if you don't, you first have to evaluate and choose a language and then work out how to write it, get it going, and make it start automatically. Any one of these tasks may be, strictly speaking, easier than the more opaque interactions with an odd-looking circuit board, but the freedom of choice adds its own complexities. Another option is to marry the capability of a microcontroller to connect to low-level components such as dials, LEDs, and motors with the hard processing running on a computer or phone. A kit such as an Arduino easily connects to a computer via USB, and you can speak to it via the serial port in a standard way in any programming language. Some phones also have this capability. However, because phones, like an Arduino, are USB "devices", in theory they can't act as the computer "host" (the side of the USB connection usually in charge of things) that controls the Arduino. The interesting hack used by the Android development kit (ADK), for example, is for the
Arduino to have a USB host shield—that is, it pretends to be the computer end of the connection and so in theory controls the phone. In reality, the phone does the complicated processing and communication with the Internet and so on. As always, there is no single “right answer” but a set of trade-offs. Don’t let this put you off starting a prototype, though. There are really no “wrong answers” either for that; the prototype is something that will get you started, and the experience of making it will teach you much more about the final best platform for your device than any book, even this one, can.

PROTOTYPES AND PRODUCTION

While prototyping is a major hurdle, perhaps the biggest obstacle to getting a project started, scaling up to building more than one device, perhaps many thousands of them, brings a whole new set of challenges and questions.

CHANGING EMBEDDED PLATFORM

When you scale up, you may well have to think about moving to a different platform, for cost or size reasons. If you've started with a free-form, powerful programming platform, you may find that porting the code to a more restricted, cheaper, and smaller device brings many challenges. This is something to be aware of. If the first prototype, built on a PC, iPhone, BeagleBone, or whatever, has helped you get investment or collaborators, you may be well placed to go about replicating that compelling functionality on your final target. Of course, if you've used a constrained platform in prototyping, you may find that you have to make choices and accept limitations in your code. Dynamic memory allocation in the 2KB of RAM that the Arduino provides may not be especially efficient, so how should that make you think about using strings or complex data structures? If you port to a more powerful platform, you may be able to rewrite your code in a more modern, high-level way or simply take advantage of faster processor speed and more RAM. But will the new platform have the same I/O capabilities? And you have to consider the ramping-up time to learn new technologies and languages. In practice, you will often find that you don't need to change platforms. Instead, you might look at, for example, replacing an Arduino prototyping microcontroller with an AVR chip (the same chip that powers the Arduino) and just those components that you actually need, connected on a custom PCB.

PHYSICAL PROTOTYPES AND MASS PERSONALISATION

Chances are that the production techniques you use for the physical side of your device won't translate directly to mass production. However, while the technique might change—injection moulding in place of 3D printing, for example—in most cases it won't change what is possible. One aspect that may be of interest is the way that digital fabrication tools allow each item to be slightly different, letting you personalize each device in some way. There are challenges in scaling this to production, as you will need to keep producing the changeable parts in quantities of one, but mass personalization, as the approach is called, means you can offer something unique, with the accompanying potential to charge a premium.
CLIMBING INTO THE CLOUD

The server software is the easiest component to take from prototype into production. As we saw earlier, it might involve switching from a basic web framework to something more involved (particularly if you need to add user accounts and the like), but you will be able to find an equivalent for whichever language you have chosen. That means most of the business logic will move across with minimal changes. Beyond that, scaling up in the early days will involve buying a more powerful server. If you are running on a cloud computing platform, such as Amazon Web Services, you can even have the service dynamically expand and contract, as demand dictates.

OPEN SOURCE VERSUS CLOSED SOURCE

If you're so minded, you could spend a lifetime arguing about the definitions of "closed" and "open" source, and some people have, in fact, made a career out of it. Broadly, we're looking at two issues: your assertion, as the creator, of your Intellectual Property rights; and your users' rights to freely tinker with your creation. We imagine many of this book's readers will be creative in some sense, perhaps tinkerers, inventors, programmers, or designers. As a creative person, you may be torn between your own desire to learn how things work, and to modify and re-use them, and the worry that if other people exercised that same right on your own design/invention/software, you might not get the recognition and earnings that you expect from it. In fact, this tension between the closed and open approaches is rather interesting, especially when applied to a mix of software and hardware, as we find with Internet of Things devices. While many may already have made up their minds, in one or the other direction, we suggest at least thinking about how you can use both approaches in your project.

WHY CLOSED?

Asserting Intellectual Property rights is often the default approach, especially for larger companies. If you have declared copyright on some source code or a design, someone who wants to market the same product cannot do so by simply reading your instructions and following them. That person would instead have to reverse engineer the functionality of the hardware and software. In addition, simply copying the design slavishly would also infringe copyright. You might also be able to protect distinctive elements of the visual design with trademarks, and of the software and hardware with patents. Although getting good legal information on what to protect and how best to enforce those rights is hard and time-consuming, larger companies may well be geared up to take this route. If you are developing an Internet of Things device in such a context, working within the culture of the company may simply be easier, unless you are willing to try to persuade your management, marketing, and legal teams that they should try something different. If you're working on your own or in a small company, you might simply trademark your distinctive brand and rely on copyright to protect everything else. Note that starting a project as closed source doesn't prevent you from later releasing it as open source (whereas after you've licensed something as open source, you can't simply revoke that licence). You may have a strong emotional feeling about your Intellectual Property rights: especially if your creativity is what keeps you and your loved ones fed, this is entirely understandable. But it's worth bearing in mind that, as always, there is a trade-off between how much those rights actually help towards this important goal and what the benefits of being more open are.

WHY OPEN?

In the open source model, you release the sources that you use to create the project to the whole world. You might publish the software code to GitHub (http://github.com), the electronic schematics using Fritzing (http://fritzing.org) or SolderPad (http://solderpad.com), and the design of the housing/shell to Thingiverse (http://www.thingiverse.com). If you're not used to this practice, it might seem crazy: why would you give away something that you care about, that you're working hard to accomplish? There are several reasons to give away your work:

> You may gain positive comments from people who liked it.

> It acts as a public showcase of your work, which may affect your reputation and lead to new opportunities.

> People who use your work may suggest or implement features or fix bugs.

> By generating early interest in your project, you may get support and mindshare of a quality that it would be hard to pay for.

Of course, this is also a gift economy: you can use other people's free and open source contributions within your own project. Forums and chat channels exist all over the Internet, with people more or less freely discussing their projects because doing so helps with one or more of the benefits mentioned here. If you're simply "scratching an itch" with a project, releasing it as open source may be the best thing you could do with it. A few words of encouragement from someone who liked your design and your blog post about it may be invaluable to get you moving when you have a tricky moment on it. A bug fix from someone who tried using your code in a way you had never thought of may save you hours of unpleasant debugging later. And if you're very lucky, you might become known as "that bubble machine guy" or get invited to conferences to talk about your LED circuit. If you have a serious work project, you may still find that open source is the right decision, at least for some of your work.

Disadvantages of Open Source

The obvious disadvantage of open source—"but people will steal my idea!"—may, in fact, be less of a problem than you might think. In general, if you talk to people about an idea, it's hard enough to get them to listen because they are waiting to tell you about their great idea (the selfish cads). If people do use your open source contribution, they will most likely be using it in a way that interests them; the universe of ideas is still, fortunately, very large. However, deciding to release as open source may take more resources. As the saying goes, the shoemaker's children go barefoot: if you're designing for other people, you have to make something of a high standard, but for yourself, you might be tempted to cut corners. When you have a working prototype, this should be a moment of celebration. Then having to go back and fix everything so that you can release it in a form that doesn't make you ashamed will take time and resources. Of course, the right way to handle this process would be to start pushing everything to an open repository immediately and develop in public. This is much more the "open source way". It may take some time to get used to but may work for you.
After you release something as open source, you may still have a perceived duty to maintain and support it, or at least to answer questions about it via email, forums, and chatrooms. Although you may not have paying customers, your users are a community that you may want to maintain. It is true that, if you have volunteered your work and time, you are entirely free to limit that commitment whenever you want. But abandoning something before you've built up a community around it to pass the reins to cannot be classed as a successful open source project.

Being a Good Citizen

The idea that there is a "true way" to do open source is worth thinking about. There is in some ways a cachet to "doing open source" that may be worth having. Developers may be attracted to your project on that basis. If you're courting this goodwill, it's important to make sure that you deserve it. If you say you have an open platform, releasing only a few libraries, months afterwards, with no documentation or documentation of poor quality could be considered rude. Also, your open source work should make some attempt to play well with other open platforms. Making assumptions that lock the project in to a device you control, for example, would be fine for a driver library but isn't great for an allegedly open project. In some ways, being a good citizen is a consideration to counterbalance the advantages of the gift economy idea. But, of course, it is natural that any economy has its rules of citizenship!

Open Source as a Competitive Advantage

Although you might be tempted to be misty-eyed about open source as a community of good citizens and a gift economy, it's important to understand the possibility of using it to competitive advantage. First, using open source work is often a no-risk way of getting software that has been tested, improved, and debugged by many eyes. As long as it isn't licensed with an extreme viral licence (such as the AGPL), you really have no reason not to use such work, even in a closed source project. Sure, you could build your own microcontroller from parts and write your own library to control servo motors, your own HTTP stack, and a web framework. Or you could use an Arduino, the Arduino servo libraries and Ethernet stack, and Ruby on Rails, for example. Commercial equivalents may be available for all these examples, but then you have to factor in the cost and rely on a single company's support forums instead of all the information available on the Internet.
Second, using open source aggressively gives your product the chance to gain mindshare. In this book we talk a lot about the Arduino; as you have seen in this chapter, one could easily argue that it isn't the most powerful platform ever and will surely be improved upon. It scores many points on grounds of cost but even more so on mindshare. The design is open; therefore, many other companies have produced clones of the board or components such as shields that are compatible with it. This has led to amusing things such as the Arduino header layout "bug" (http://forum.arduino.cc/index.php/topic,22737.0.html#subject_171839), which is the result of a design mistake that has nevertheless been replicated by other manufacturers to target the same community. If an open source project is good enough and gets word out quickly and appealingly, it can much more easily gain the goodwill and enthusiasm to become a platform. The "geek" community often chooses a product because, rather than being a commercial "black box", it, for example, exposes a Linux shell or can communicate using an open protocol such as XML. This community can be your biggest ally.


Open Source as a Strategic Weapon

One step further in the idea of open source used aggressively is the idea of businesses using open source strategically to further their interests (and undermine their competitors). In "Commoditizing your complements" (http://www.joelonsoftware.com/articles/StrategyLetterV.html), software entrepreneur Joel Spolsky argues that many companies that invest heavily in open source projects are doing just that. In economics, the concept of complements describes products and services that are bought in conjunction with your product—for example, DVDs and DVD players. If the price of one of those goods goes down, then demand for both goods is likely to rise. Companies can therefore use improvements in open source versions of complementary products to increase demand for their own products. If you manufacture microcontrollers, for example, then improving the open source software frameworks that run on the microcontrollers can help you sell more chips.
While open sourcing your core business would be risky indeed, standardising things that you use but which are core to your competitor’s business may, in fact, help to undermine that competitor. So Google releasing Android as open source could undermine Apple’s iOS platform. Facebook releasing Open Compute, to help efficiently maintain large data centres, undermines Google’s competitive advantage. Facebook clearly needs efficient data centres, and open sourcing that work gives the company the opportunity to gain contributions from many clever open source programmers, while giving nothing away about Facebook’s core algorithms for the social graph. This dynamic is fascinating with the Internet of Things because several components in different spaces interact to form the final product: the physical design, the electronic components, the microcontroller, the exchange with the Internet, and the back-end APIs and applications. This is one reason why many people are trying to become leaders in the middleware layers, such as Xively (free for developers, but not currently open source, though many non-core features are open). While you are prototyping, these considerations are secondary, but being aware of these issues is worthwhile so that you understand the risks and opportunities involved.

MIXING OPEN AND CLOSED SOURCE

We’ve discussed open sourcing many of your libraries and keeping your core business closed. While many businesses can exist as purely one or the other, you shouldn’t discount having both coexist. As long as you don’t make unfounded assertions about how much you use open software, it’s still possible to be a “good citizen” who contributes back to some projects, whether by contributing work or simply by helping others in forums, while also gaining many of the advantages of open source. While both of us tend to be keen on the idea of open source, it’s also true that not all our work is open source. We have undertaken some of it for commercial clients who wanted to retain the IP. Some of the work was simply not polished enough to be worth the extra effort of making a viable open release. Adrian’s project Bubblino has a mix of licences:

a. Arduino code is open source.

b. Schematics are available but not especially well advertised.

c. Server code is closed source.

The server code was partly kept closed source because some details on the configuration of the Internet of Things device were possibly part of the commercial advantage.

CLOSED SOURCE FOR MASS MARKET PROJECTS

One edge case for preferring closed source when choosing a licence is when you can realistically expect that a project might be not just successful but huge, that is, a mass-market commodity. While “the community” of open source users is a great ally when you are growing a platform by word of mouth, if you can get an existing supply and distribution chain on your side, the advantages of being first to market, and of getting there more cheaply, may well be the most important thing. Consider Nest, an intelligent thermostat: the area of smart energy metering and control is one in which many people are experimenting. The moment an international power company chooses to roll out power monitors to all its customers, such a project becomes instantaneously mass market. This makes it a very tempting proposition to copy if you are a highly skilled, highly geared-up manufacturer in China, for example. If you also have the schematics and full source code, you can even skip the investment required to reverse-engineer the product. The costs and effort required to move to mass scale show how, for a physical device, the supply chain can outweigh other considerations. In 2001, Paul Graham argued compellingly that the choice of programming language (in his case, Lisp) could leave competitors in the dirt because all of his competitors chose alternative languages with much slower speed of development (www.paulgraham.com/avg.html). Of course, the key factor wasn’t so much the development platform as your time to market versus your competitor’s time to market. The tension between open and closed source informs this as well.

UNIT-3

PROTOTYPING EMBEDDED DEVICES

ELECTRONICS


Before we get stuck into the ins and outs of microcontroller and embedded computer boards, let’s address some of the electronics components that you might want to connect to them. Don’t worry if you’re scared of things such as having to learn soldering. You are unlikely to need it for your initial experiments. Most of the prototyping can be done on what are called solderless breadboards. They enable you to build components together into a circuit with just a push-fit connection, which also means you can experiment with different options quickly and easily. When it comes to thinking about the electronics, it’s useful to split them into two main categories:

Sensors: Sensors are the ways of getting information into your device, finding out things about your surroundings.

Actuators: Actuators are the outputs for the device—the motors, lights, and so on, which let your device do something to the outside world.

Within both categories, the electronic components can talk to the computer in a number of ways. The simplest is through digital I/O, which has only two states: a button can be either pressed or not; an LED can be either on or off. These states are usually connected via general-purpose input/output (GPIO) pins, which map a digital 0 in the processor to 0 volts in the circuit and a digital 1 to a set voltage, usually the voltage that the processor is running at (commonly 5V or 3.3V).
If you want a more nuanced connection than just on/off, you need an analogue signal. For example, if you wire up a potentiometer to let you read in the position of a rotary knob, you will get a varying voltage, depending on the knob’s location. Similarly, if you want to run a motor at a speed other than off or full speed, you need to feed it with a voltage somewhere between 0V and its maximum rating. Because computers are purely digital devices, you need a way to translate between the analogue voltages in the real world and the digital of the computer.
An analogue-to-digital converter (ADC) lets you measure varying voltages. Microcontrollers often have a number of these converters built in. They convert a voltage level between 0V and a predefined maximum (often the same 5V or 3.3V the processor is running at, but sometimes a fixed value such as 1V) into a number, whose range depends on the accuracy of the ADC. The Arduino has 10-bit ADCs, which by default measure voltages between 0 and 5V. A voltage of 0V gives a reading of 0; a voltage of 5V reads as 1023 (the maximum value that can be stored in 10 bits); and voltages in between give readings in proportion. 1V maps to about 205; a reading of 512 means the voltage was about 2.5V; and so on.

The flipside of an ADC is a DAC, or digital-to-analogue converter. DACs let you generate varying voltages from a digital value but are less common as a standard feature of microcontrollers. This is because a technique called pulse-width modulation (PWM) can approximate a DAC by rapidly turning a digital signal on and off so that the average value is the level you desire. PWM requires simpler circuitry, and for certain applications, such as fading an LED, it is actually the preferred option.

For more complicated sensors and modules, there are interfaces such as the Serial Peripheral Interface (SPI) bus and Inter-Integrated Circuit (I2C). These standardized mechanisms allow modules to communicate, so sensors or things such as Ethernet modules or SD cards can interface to the microcontroller. Naturally, we can’t cover all the possible sensors and actuators available, but we list some of the more common ones here to give a flavor of what is possible.
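To make the arithmetic concrete, here is a small sketch of the conversions described above, written in plain C++ rather than Arduino library code; it assumes a 10-bit ADC with a 5V reference and Arduino's 8-bit (0–255) PWM duty-cycle range, and the function names are our own, for illustration only.

```cpp
#include <cassert>
#include <cmath>

const double VREF = 5.0;     // assumed ADC reference voltage
const int ADC_MAX = 1023;    // largest value representable in 10 bits

// What reading would a 10-bit ADC give for this input voltage?
int voltageToReading(double volts) {
    return static_cast<int>(std::round(volts / VREF * ADC_MAX));
}

// What voltage does a given 10-bit ADC reading correspond to?
double readingToVoltage(int reading) {
    return reading * VREF / ADC_MAX;
}

// PWM approximates an analogue output: the average voltage of a
// rapidly switched digital signal, for an 8-bit (0-255) duty cycle.
double pwmAverageVoltage(int duty) {
    return duty * VREF / 255;
}
```

For example, voltageToReading(1.0) gives 205 and readingToVoltage(512) gives roughly 2.5V, matching the figures in the text.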
SENSORS
Pushbuttons and switches, which are probably the simplest sensors, allow some user input. Potentiometers (both rotary and linear) and rotary encoders enable you to measure movement. Sensing the environment is another easy option. Light-dependent resistors (LDRs) allow measurement of ambient light levels, thermistors and other temperature sensors allow you to know how warm it is, and sensors to measure humidity or moisture levels are easy to build. Microphones obviously let you monitor sounds and audio, but piezo elements (used in certain types of microphones) can also be used to respond to vibration. Distance-sensing modules, which work by bouncing either an infrared or ultrasonic signal off objects, are readily available and as easy to interface to as a potentiometer.

 

 


ACTUATORS
One of the simplest and yet most useful actuators is light, because it is easy to create electronically and gives an obvious output. Light-emitting diodes (LEDs) typically come in red and green but also white and other colours. RGB LEDs have a more complicated setup but allow you to mix the levels of red, green, and blue to make whatever colour of light you want. More complicated visual outputs also are available, such as LCD screens to display text or even simple graphics. Piezo elements, as well as responding to vibration, can be used to create it, so you can use a piezo buzzer to create simple sounds and music. Alternatively, you can wire up outputs to speakers to create more complicated synthesized sounds.

Of course, for many tasks, you might also want to use components that move things in the real world. Solenoids can be used to create a single, sharp pushing motion, which could be useful for pushing a ball off a ledge or tapping a surface to make a musical sound. More complicated again are motors. Stepper motors can be moved in steps, as the name implies; usually, a fixed number of steps performs a full rotation. DC motors simply move at a given speed when told to. Both types of motor can be one-directional or move in both directions. Alternatively, if you want a motor that will turn to a given angle, you need a servo. Although a servo is more controllable, it tends to have a shorter range of motion, often 180 degrees or fewer (whereas steppers and DC motors turn indefinitely).

For all the kinds of motors that we’ve mentioned, you typically want to connect the motors to gears to alter the range of motion or convert circular movement to linear, and so on.

SCALING UP THE ELECTRONICS

From the perspective of the electronics, the starting point for prototyping is usually a “breadboard”. This lets you push-fit components and wires to make up circuits without requiring any soldering and therefore makes experimentation easy. When you’re happy with how things are wired up, it’s common to solder the components onto some protoboard, which may be sufficient to make the circuit more permanent and prevent wires from going astray.
Moving beyond the protoboard option tends to involve learning how to lay out a PCB. This task isn’t as difficult as it sounds, for simple circuits at least, and mainly involves learning how to use a new piece of software and understanding some new terminology.
For small production runs, you’ll likely use through-hole components, so called because the legs of the component go through holes in the PCB and tend to be soldered by hand. You will often create your designs as companion boards to an existing microcontroller platform—generally called shields in the Arduino community. This approach lets you bootstrap production without worrying about designing the entire system from scratch.


When you want to scale things even further, moving to a combined board allows you to remove any unnecessary components from the microcontroller board, and switching to surface mount components—where the legs of the chips are soldered onto the same surface as the chip—eases the board’s assembly with automated manufacturing lines.


EMBEDDED COMPUTING BASICS

The rest of this chapter examines a number of different embedded computing platforms, so it makes sense to first cover some of the concepts and terms that you will encounter along the way. Providing background is especially important because many of you may have little or no idea about what a microcontroller is. Although we’ve been talking about computing power getting cheaper and more powerful, you cannot just throw a bunch of PC components into something and call it an Internet of Things product. If you’ve ever opened up a desktop PC, you’ve seen that it’s a collection of discrete modules to provide different aspects of functionality. It has a main motherboard with its processor, one or two smaller circuit boards providing the RAM, and a hard disk to provide the long-term storage. So, it has a lot of components, which provide a variety of general-purpose functionality and which all take up a corresponding chunk of physical space.

MICROCONTROLLERS
Internet of Things devices take advantage of more tightly integrated and miniaturized solutions—from the most basic level of microcontrollers to more powerful system-on-chip (SoC) modules. These systems combine the processor, RAM, and storage onto a single chip, which means they are much more specialized, smaller than their PC equivalents, and also easier to build into a custom design. Microcontrollers are the engines of countless sensors and automated factory machinery, and they are the last bastions of 8-bit computing in a world that has long since moved to 32-bit and beyond. Microcontrollers are very limited in their capabilities, but because many embedded tasks are simple, 8-bit microcontrollers are still in use, although the price of 32-bit microcontrollers is now dropping to the level where the 8-bit parts are starting to be edged out. Usually, microcontrollers offer RAM measured in kilobytes and storage in the tens of kilobytes; however, they can still achieve a lot despite these limitations. You’d be forgiven if the mention of 8-bit computing and RAM measured in kilobytes gives you flashbacks to the early home computers of the 1980s, such as the Commodore 64 or the Sinclair ZX Spectrum. The 8-bit microcontrollers have the same sort of internal workings and similar levels of memory to work with. There have been some improvements in the intervening years, though: the modern chips are much smaller, require less power, and run about five times faster than their 1980s counterparts. Unlike the market for desktop computer processors, which is dominated by two manufacturers (Intel and AMD), the microcontroller market consists of many manufacturers. A better comparison is with the automotive market: in the same way that there are many different car manufacturers, each with a range of models for different uses, so there are lots of microcontroller manufacturers (Atmel, Microchip, NXP, and Texas Instruments, to name a few), each with a range of chips for different applications.
The ubiquitous Arduino platform is based around Atmel’s AVR ATmega family of microcontroller chips.
The on-board inclusion of an assortment of GPIO pins and ADC circuitry means that microcontrollers are easy to wire up to all manner of sensors, lights, and motors. Because the devices using them are focused on performing one task, they can dispense with most of what we would term an operating system, resulting in a simpler and much slimmer code footprint than that of a SoC or PC solution. In these systems, functions which require greater resources are usually provided by additional single-purpose chips, which at times are more powerful than the microcontroller that controls them. For example, the WizNet Ethernet chip used by the Arduino Ethernet has eight times more RAM than the Arduino itself.

SYSTEM-ON-CHIPS
In between the low-end microcontroller and a full-blown PC sits the SoC (for example, the BeagleBone or the Raspberry Pi). Like a microcontroller, a SoC combines a processor and a number of peripherals onto a single chip, but usually with greater capabilities. The processors usually run at a few hundred megahertz, nudging into the gigahertz range for top-end solutions, and include RAM measured in megabytes rather than kilobytes. Storage for SoC modules tends not to be included on the chip itself, with SD cards being a popular solution. The greater capabilities of SoCs mean that they need some sort of operating system to marshal their resources. A wide selection of embedded operating systems, both closed and open source, is available, from both specialized embedded providers and the big OS players, such as Microsoft and the Linux community. Again, as the price falls for increased computing power, the popularity and familiarity of options such as Linux are driving its wider adoption.


CHOOSING YOUR PLATFORM

How to choose the right platform for your Internet of Things device is as easy a question to answer as working out the meaning of life. This isn’t to say that it’s an impossible question—more that there are almost as many answers as there are possible devices. The platform you choose depends on the particular blend of price, performance, and capabilities that suit what you’re trying to achieve. And just because you settle on one solution, that doesn’t mean somebody else wouldn’t have chosen a completely different set of options to solve the same problem.
Start by choosing a platform to prototype in. The following sections discuss some of the factors that you need to weigh—and possibly play off against each other—when deciding how to build your device.

Processor Speed

The processor speed, or clock speed, of your processor tells you how fast it can process the individual instructions in the machine code for the program it’s running. Naturally, a faster processor speed means that it can execute instructions more quickly. The clock speed is still the simplest proxy for raw computing power, but it isn’t the only one. You might also make a comparison based on millions of instructions per second (MIPS), depending on what numbers are being reported in the datasheet or specification for the platforms you are comparing. Some processors may lack hardware support for floating-point calculations, so if the code involves a lot of complicated mathematics, a by-the-numbers slower processor with hardware floating-point support could be faster than a slightly higher performance processor without it. Generally, you will use the processor speed as one of a number of factors when weighing up similar systems. Microcontrollers tend to be clocked at speeds in the tens of MHz, whereas SoCs run at hundreds of MHz or possibly low GHz. If your project doesn’t require heavyweight processing—for example, if it needs only networking and fairly basic sensing—then some sort of microcontroller will be fast enough. If your device will be crunching lots of data—for example, processing video in real time—then you’ll be looking at a SoC platform.
RAM
RAM provides the working memory for the system. If you have more RAM, you may be able to do more things or have more flexibility over your choice of coding algorithm. If you’re handling large datasets on the device, that could govern how much space you need. You can often find ways to work around memory limitations, either in code or by handing off processing to an online service. It is difficult to give exact guidelines on the amount of RAM you will need, as it varies from project to project. However, microcontrollers with less than 1KB of RAM are unlikely to be of interest; if you want to run standard encryption protocols, you will need at least 4KB, and preferably more. For SoC boards, particularly if you plan to run Linux as the operating system, we recommend at least 256MB.

Networking
How your device connects to the rest of the world is a key consideration for Internet of Things products. Wired Ethernet is often the simplest for the user—generally plug and play—and the cheapest, but it requires a physical cable. Wireless solutions obviously avoid that requirement but introduce more complicated configuration. WiFi is the most widely deployed and provides an existing infrastructure for connections, but it can be more expensive and less optimized for power consumption than some of its competitors. Other short-range wireless technologies can offer better power-consumption profiles or costs than WiFi, but usually with the trade-off of lower bandwidth. ZigBee is one such technology, aimed particularly at sensor networks and scenarios such as home automation. The more recent Bluetooth LE protocol (also known as Bluetooth 4.0) has a very low power-consumption profile similar to ZigBee’s and could see more rapid adoption due to its inclusion in the standard Bluetooth chips found in phones and laptops. There is, of course, the existing Bluetooth standard as another possible choice. And at the boring-but-very-cheap end of the market sit long-established options such as the RFM12B, which operates in the 434 MHz radio spectrum rather than the 2.4 GHz range of the other options we’ve discussed. For remote or outdoor deployment, little beats simply using the mobile phone networks. For low-bandwidth, higher-latency communication, you could use something as basic as SMS; for higher data rates, you would use the same data connections, such as 3G, as a smartphone.

USB
If your device can rely on a more powerful computer being nearby, tethering to it via USB can be an easy way to provide both power and networking. You can buy some microcontrollers in versions which include support for USB, so choosing one of them reduces the need for an extra chip in your circuit. Instead of the microcontroller presenting itself as a device, some can also act as the USB “host”. This configuration lets you connect items that would normally expect to be connected to a computer—devices such as phones (for example, using the Android ADK), additional storage capacity, or WiFi dongles. Devices such as WiFi dongles often depend on additional software on the host system, such as networking stacks, and so are better suited to the more computer-like option of a SoC.

Power Consumption

Faster processors are often more power hungry than slower ones. For devices which might be portable or rely on an unconventional power supply (batteries, solar power) depending on where they are installed, power consumption may be an issue. Even with access to mains electricity, power consumption may be something to consider, because lower consumption may be a desirable feature. However, processors may have a minimal power-consumption sleep mode. This mode may allow you to use a faster processor to perform operations quickly and then return to low-power sleep; therefore, a more powerful processor may not be a disadvantage even in a low-power embedded device.

Interfacing with Sensors and Other Circuitry

In addition to talking to the Internet, your device needs to interact with something else—either sensors to gather data about its environment, or motors, LEDs, screens, and so on, to provide output. You could connect to the circuitry through some sort of peripheral bus (SPI and I2C being common ones), through ADC or DAC modules to read or write varying voltages, or through generic GPIO pins, which provide digital on/off inputs or outputs. Different microcontrollers or SoC solutions offer different mixtures of these interfaces in differing numbers.

Physical Size and Form Factor

The continual improvement in manufacturing techniques for silicon chips means that we’ve long passed the point where the limiting factor in the size of a chip is the amount of space required for all the transistors and other components that make up the circuitry on the silicon. Nowadays, the size is governed by the number of connections the chip needs to make to the surrounding components on the PCB. With the traditional through-hole design, most commonly used for homemade circuits, the legs of the chip are usually spaced at 0.1" intervals. Even if your chip has relatively few connections to the surrounding circuit—and 16 pins is nothing for such a chip—you will end up with over 1.5" (~4cm) for the perimeter of the chip. More complex chips can easily run to over a hundred connections; finding room for a chip with a 10" (25cm) perimeter might be a bit tricky! You can pack the legs closer together with surface-mount technology because it doesn’t require holes to be drilled in the board for connections. Combining that with the trick of hiding some of the connections on the underside of the chip means that it is possible to use complex designs without resorting to PCBs the size of a table. The limit to how small each connection can be made is then governed by the capabilities and tolerances of your manufacturing process.
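As a rough check on the numbers above, here is the arithmetic for a classic 16-pin through-hole (DIP) package, sketched in plain C++. The 0.1" pin pitch comes from the text; the 0.3" spacing between the two rows of pins is our assumption for a typical narrow DIP, and exact package dimensions vary by manufacturer.

```cpp
#include <cassert>
#include <cmath>

// Rough perimeter of a DIP (dual in-line) package: two parallel rows
// of pins at 0.1" pitch, with an assumed 0.3" gap between the rows.
double dipPerimeterInches(int totalPins, double pitch = 0.1,
                          double rowSpacing = 0.3) {
    int pinsPerSide = totalPins / 2;
    double sideLength = (pinsPerSide - 1) * pitch; // span of one pin row
    return 2 * (sideLength + rowSpacing);
}
```

For a 16-pin package this works out to about 2 inches, comfortably over the 1.5" figure quoted above.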
Some surface-mount designs are big enough for home-etched PCBs and can be hand-soldered. Others require professionally produced PCBs and accurate pick-and-place machines to locate them correctly. Due to these trade-offs in size versus manufacturing complexity, many chip designs are available in a number of different form factors, known as packages. This lets the circuit designer choose the form that best suits his particular application. All three chips pictured in the following figure provide identical functionality because they are all AVR ATmega328 microcontrollers. The one on the left is the through-hole package, mounted here in a socket so that it can be swapped out without soldering. The two others are surface mount, in two different packages, showing the reduction in size but at the expense of ease of soldering.
Looking at the ATmega328 leads us nicely into comparing some specific embedded computing platforms. We can start with a look at one which so popularized the ATmega328 that a couple of years ago it led to a worldwide shortage of the chip in the through-hole package, as for a short period demand outstripped supply.

ARDUINO
Without a doubt, the poster child for the Internet of Things, and physical computing in general, is the Arduino. These days the Arduino project covers a number of microcontroller boards, but its birth was in Ivrea in Northern Italy in 2005. A group from the Interaction Design Institute Ivrea (IDII) wanted a board for its design students to use to build interactive projects. An assortment of boards was around at that time, but they tended to be expensive, hard to use, or both. So, the team put together a board which was cheap to buy—around €20—and included an onboard serial connection to allow it to be easily programmed. Combined with an extension of the Wiring software environment, it made a huge impact on the world of physical computing.

(Figure: An Arduino Ethernet board, plugged in, wired up to a circuit, and ready for use.)

A decision early on to make the code and schematics open source meant that the Arduino board could outlive the demise of the IDII and flourish. It also meant that people could adapt and extend the platform to suit their own needs. As a result, an entire ecosystem of boards, add-ons, and related kits has flourished. The Arduino team’s focus on simplicity rather than raw performance for the code has made the Arduino the board of choice in almost every beginner’s physical computing project, and the open source ethos has encouraged the community to share circuit diagrams, parts lists, and source code. It’s almost the case that whatever your project idea is, a quick search on Google for it, in combination with the word “Arduino”, will throw up at least one project that can help bootstrap what you’re trying to achieve. If you prefer learning from a book, we recommend picking up a copy of Arduino For Dummies, by John Nussey (Wiley, 2013).
The “standard” Arduino board has gone through a number of iterations: Arduino NG, Diecimila, Duemilanove, and Uno. The Uno features an ATmega328 microcontroller and a USB socket for connection to a computer. It has 32KB of storage and 2KB of RAM, but don’t let those meagre amounts of memory put you off; you can achieve a surprising amount despite the limitations. The Uno also provides 14 GPIO pins (of which 6 can also provide PWM output) and 6 10-bit resolution ADC pins. The ATmega’s serial port is made available through both the IO pins, and, via an additional chip, the USB connector. If you need more space or a greater number of inputs or outputs, look at the Arduino Mega 2560. It marries a more powerful ATmega microcontroller to the same software environment, providing 256KB of Flash storage, 8KB of RAM, three more serial ports, a massive 54 GPIO pins (14 of those also capable of PWM) and 16 ADCs. Alternatively, the more recent Arduino Due has a 32-bit ARM core microcontroller and is the first of the Arduino boards to use this architecture. Its specs are similar to the Mega’s, although it ups the RAM to 96KB.


DEVELOPING ON THE ARDUINO


More than just specs, the experience of working with a board may be the most important factor, at least at the prototyping stage. As previously mentioned, the Arduino is optimized for simplicity, and this is evident from the way it is packaged for use. Using a single USB cable, you can not only power the board but also push your code onto it and (if needed) communicate with it—for example, for debugging or to use the computer to store data retrieved by the sensors connected to the Arduino. Of course, although the Arduino was at the forefront of this drive for ease of use, most of the microcontrollers we look at in this chapter attempt the same, some less successfully than others.

Integrated Development Environment

You usually develop for the Arduino using the integrated development environment (IDE) that the team supply at http://arduino.cc. Although this is a fully functional IDE, based on the one used for the Processing language (http://processing.org/), it is very simple to use. Most Arduino projects consist of a single file of code, so you can think of the IDE mostly as a simple file editor. The controls that you use the most are those to check the code (by compiling it) and to push code to the board.

Pushing Code

Connecting to the board should be relatively straightforward via a USB cable. Sometimes you might have issues with the drivers (especially on some versions of Windows) or with permissions on the USB port (some Linux driver packages don’t add you to the dialout group), but these are usually resolved once and for all. After this, you need to choose the correct serial port (which you can discover from system logs or select by trial and error) and the board type (from the appropriate menus; you may need to look carefully at the labelling on your board and its CPU to determine which option to select).
When your setup is correct, the process of pushing code is generally simple: first, the code is checked and compiled, with any compilation errors reported to you. If the code compiles successfully, it gets transferred to the Arduino and stored in its flash memory. At this point, the Arduino reboots and starts running the new code.
Operating System

The Arduino doesn’t, by default, run an OS as such, only the bootloader, which simplifies the code-pushing process described previously. When you switch on the board, it simply runs the code that you have compiled until the board is switched off again (or the code crashes). It is, however, possible to upload an OS to the Arduino, usually a lightweight real-time operating system (RTOS) such as FreeRTOS/DuinOS. The main advantage of these operating systems is their built-in support for multitasking. However, for many purposes, you can achieve reasonable results with a simpler task-dispatching library. If you dislike the simple life, it is even possible to compile code without using the IDE, by using the toolset for the Arduino’s chip—for example, for all the boards until the recent ARM-based Due, the avr-gcc toolset. The avr-gcc toolset (www.nongnu.org/avr-libc/) is the collection of programs that let you compile code to run on the AVR chips used by the rest of the Arduino boards and flash the resultant executable to the chip. It is used by the Arduino IDE behind the scenes but can be used directly as well.
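To illustrate what such a task-dispatching library might look like, here is a minimal cooperative dispatcher sketched in plain C++. The class and method names (Dispatcher, every, tick) are our own invention, not any particular library's API. Each registered task records when it last ran; a tick() call, which you would make from loop() with the current millis() value, runs any task whose interval has elapsed.

```cpp
#include <functional>
#include <vector>

struct Task {
    unsigned long interval;   // how often to run, in ms
    unsigned long lastRun;    // time of last run, in ms
    std::function<void()> fn; // the work to do
};

class Dispatcher {
    std::vector<Task> tasks;
public:
    // Register a function to be run every intervalMs milliseconds.
    void every(unsigned long intervalMs, std::function<void()> fn) {
        tasks.push_back({intervalMs, 0, fn});
    }
    // Call from loop() with the current time (e.g. millis()); runs
    // each task whose interval has elapsed since its last run.
    void tick(unsigned long now) {
        for (auto &t : tasks) {
            if (now - t.lastRun >= t.interval) {
                t.lastRun = now;
                t.fn();
            }
        }
    }
};
```

Because tick() never blocks, several tasks with different intervals can share one loop() without the delays of one starving the others, which is the essential benefit a task-dispatching library provides over naive delay() calls.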

Language
The language usually used for Arduino is a slightly modified dialect of C++ derived from the Wiring platform. It includes some libraries used to read and write data from the I/O pins provided on the Arduino and to do some basic handling for "interrupts" (a way of doing multitasking, at a very low level). This variant of C++ tries to be forgiving about the ordering of code; for example, it allows you to call functions before they are defined. This alteration is just a nicety, but it is useful to be able to order things so that the code is easy to read and maintain, given that it tends to be written in a single file. The code needs to provide only two routines:

setup(): This routine is run once when the board first boots. You could use it to set the modes of I/O pins to input or output or to prepare a data structure which will be used throughout the program.

loop(): This routine is run repeatedly in a tight loop while the Arduino is switched on. Typically, you might check some input, do some calculation on it, and perhaps do some output in response.

To avoid getting into the details of programming languages in this chapter, we just compare a simple example across all the boards: blinking a single LED:

// Pin 13 has an LED connected on most Arduino boards.
// give it a name:
int led = 13;

// the setup routine runs once when you press reset:
void setup() {
  // initialize the digital pin as an output.
  pinMode(led, OUTPUT);
}

// the loop routine runs over and over again forever:
void loop() {
  digitalWrite(led, HIGH);  // turn the LED on
  delay(1000);              // wait for a second
  digitalWrite(led, LOW);   // turn the LED off
  delay(1000);              // wait for a second
}

Reading through this code, you'll see that the setup() function does very little; it just sets up that pin number 13 is the one we're going to control (because it is wired up to an LED).
Then, in loop(), the LED is turned on and then off, with a delay of a second between each flick of the (electronic) switch. With the way that the Arduino environment works, whenever it reaches the end of one cycle—on; wait a second; off; wait a second—and drops out of the loop() function, it simply calls loop() again to repeat the process.
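The hidden dispatch that the Arduino environment performs can be mimicked in ordinary code: a main routine calls setup() once and then calls loop() forever. A minimal sketch of that pattern in Python (the run_sketch name and the loop cap are ours, added only so the example terminates; a real board loops until power-off):

```python
# Sketch of the Arduino runtime's dispatch pattern: setup() is called
# once, then loop() is called repeatedly. max_loops is illustrative;
# the real runtime loops until the board loses power or crashes.
def run_sketch(setup, loop, max_loops):
    setup()
    for _ in range(max_loops):
        loop()

events = []
run_sketch(lambda: events.append("setup"),
           lambda: events.append("loop"),
           max_loops=3)
```

Seeing the pattern written out makes it clear why global state set up in setup() survives across loop() calls: both closures share the same enclosing environment, just as Arduino globals persist between iterations.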

Debugging
Because C++ is a compiled language, a fair number of errors, such as bad syntax or failure to declare variables, are caught at compilation time. Because this happens on your computer, you have ample opportunity to get detailed and possibly helpful information from the compiler about what the problem is. Although you need some debugging experience to be able to identify certain compiler errors, others, like this one, are relatively easy to understand:
Blink.cpp: In function 'void loop()':
Blink:21: error: 'digitalWritee' was not declared in this scope

In the function loop(), we deliberately misspelled the call to digitalWrite. When the code is pushed to the Arduino, the rules of the game change, however. Because the Arduino isn't generally connected to a screen, it is hard for it to tell you when something goes wrong. Even if the code compiled successfully, certain errors still happen. An error could be raised that can't be handled, such as a division by zero or trying to access the tenth element of a 9-element list. Or perhaps your program leaks memory and eventually just stops working. Or (worse) a programming error might make the code continue to work dutifully but give entirely the wrong results. If Bubblino stops blowing bubbles, how can we distinguish between the following cases?

Nobody has mentioned us on Twitter.
The Twitter search API has stopped working.
Bubblino can't connect to the Internet.
Bubblino has crashed due to a programming error.
Bubblino is working, but the motor of the bubble machine has failed.
Bubblino is powered off.

Adrian likes to joke that he can debug many problems by looking at the flashing lights at Bubblino's Ethernet port, which flash while Bubblino connects to DNS and again when it connects to Twitter's search API, and so on. (He also jokes that we can discount the "programming error" option and that the main reason the motor would fail is that Hakim has poured bubble mix into the wrong hole. Again.) But while this approach might help distinguish two of the preceding cases, it doesn't help with the others and isn't useful if you are releasing the product into a mass market! The first commercially available version of the WhereDial has a bank of half a dozen LEDs specifically for consumer-level debugging. In the case of an error, the pattern of lights showing may help customers fix their problem or help flesh out details for a support request.
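On a headless device, distinguishing failure modes like these usually means testing each assumption in order, from the most external (power, network) inward (application logic). A hypothetical sketch of that triage logic in Python (the check names and stubbed results are ours, not Bubblino's actual code):

```python
# Hypothetical triage for a Bubblino-like device: run ordered checks
# and report the first one that fails. Each check is a (name, probe)
# pair where the probe returns True on success.
def diagnose(checks):
    for name, probe in checks:
        if not probe():
            return "failed: " + name
    return "all checks passed"

# Stub probes standing in for real network/hardware tests.
result = diagnose([
    ("power", lambda: True),
    ("internet connection", lambda: True),
    ("twitter search API", lambda: False),  # simulate an API outage
    ("motor", lambda: True),
])
```

Mapping each named failure to a distinct LED pattern is then a small step, which is essentially what the WhereDial's bank of debugging LEDs does.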
Runtime programming errors may be tricky to trap because although the C++ language has exception handling, the avr-gcc compiler doesn't support it (probably due to the relatively high memory "cost" of handling exceptions), so the Arduino platform doesn't let you use the usual try...catch... logic. Effectively, this means that you need to check your data before using it: if a number might conceivably be zero, check that before trying to divide by it. Test that your indexes are within bounds. To avoid memory leaks, look at the tips on writing code for embedded devices in Chapter 8, "Techniques for Writing Embedded Code". But code isn't, in general, created perfect: in the meantime you need ways to identify where the errors are occurring so that you can bullet-proof the code for next time. In the absence of a screen, the Arduino allows you to write information over the USB cable using Serial.write(). Although you can use this facility to communicate all kinds of data, debugging information can be particularly useful. The Arduino IDE provides a serial monitor, which echoes the data that the Arduino has sent over the USB cable. This could include any textual information, such as logging information, comments, and details about the data that the Arduino is receiving and processing (to double-check that your calculations are doing the right thing).

[Figure: Rear view of a transparent WhereDial. The bank of LEDs can be seen in the middle of the green board, next to the red "error" LED.]
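On the host side, the serial monitor shows only raw text, so adopting a simple convention, such as one key=value pair per token, makes the Arduino's debug output easy to process programmatically. A sketch of a host-side parser for such lines (the line format is our own convention, not anything the Arduino platform mandates):

```python
# Parse debug lines of the (invented) form "temp=23 led=HIGH", as an
# Arduino sketch might emit via Serial.print()/Serial.println().
def parse_debug_line(line):
    fields = {}
    for token in line.strip().split():
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

sample = "temp=23 led=HIGH loops=1042"
parsed = parse_debug_line(sample)
```

In practice you would read such lines from the serial port (for example with a serial library on the host) and log or plot the parsed values to double-check the device's calculations over time.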

RASPBERRY PI

The Raspberry Pi, unlike the Arduino, wasn't designed for physical computing at all, but rather for education. The vision of Eben Upton, trustee and cofounder of the Raspberry Pi Foundation, was to build a computer that was small and inexpensive and designed to be programmed and experimented with, like the ones he'd used as a child, rather than to passively consume games on. The Foundation gathered a group of teachers, programmers, and hardware experts to thrash out these ideas from 2006. While working at Broadcom, Upton worked on the Broadcom BCM2835 system-on-chip, which featured an exceptionally powerful graphics processing unit (GPU), capable of high-definition video and fast graphics rendering. It also featured a low-power, cheap but serviceable 700 MHz ARM CPU, almost tacked on as an afterthought. Upton described the chip as "a GPU with ARM elements grafted on" (www.gamesindustry.biz/articles/digitalfoundry-inside-raspberry-pi).

[Figure: A Raspberry Pi Model B board. The micro USB connector only provides power to the board; the USB connectivity is provided by the USB host connectors (centre-bottom and centre-right).]

The project has always taken some inspiration from a previous attempt to improve computer literacy in the UK: the "BBC Micro", built by Acorn in the early 1980s. This computer was invented precisely because the BBC producers tasked with creating TV programmes about programming realised that there wasn't a single cheap yet powerful computer platform sufficiently widespread in UK schools to make it a sensible topic for their show. The model names of the Raspberry Pi, "Model A" and "Model B", hark back to the different versions of the BBC Micro. Many of the other trustees of the Raspberry Pi Foundation, officially founded in 2009, cut their teeth on the BBC Micro. Among them was David Braben, who wrote the seminal game of space exploration, Elite, with its cutting-edge 3D wireframe graphics.
Due in large part to its charitable status, even as a small group, the Foundation has been able to deal with large suppliers and push down the costs of the components. The final boards ended up costing around £25 for the more powerful Model B (with built-in Ethernet connection). This is around the same price point as an Arduino, yet the boards are of entirely different specifications. Comparing the specs of the latest, most powerful Arduino model, the Due, with those of the top-end Raspberry Pi Model B makes the difference clear: the Raspberry Pi is effectively a computer that can run a real, modern operating system, communicate with a keyboard and mouse, talk to the Internet, and drive a TV/monitor with high-resolution graphics. The Arduino has a fraction of the raw processing power, memory, and storage required for it to run a modern OS. Importantly, the Pi Model B has built-in Ethernet (as does the Arduino Ethernet, although not the Due) and can also use cheap and convenient USB WiFi dongles, rather than having to use an extension "shield". Note that although the specifications of the Pi are in general more capable than even those of the top-of-the-range Arduino Due, we can't judge them as "better" without considering what the devices are for! To see where the Raspberry Pi fits into the Internet of Things ecosystem, we need to look at the process of interacting with it and getting it to do useful physical computing work as an Internet-connected "Thing", just as we did with the Arduino. We look at this next.
However, it is worth mentioning that a whole host of devices is available in the same target market as the Raspberry Pi: the Chumby Hacker Board, the BeagleBoard, and others, which are significantly more expensive. They may have slightly better specifications, but given the price difference, there may seem to be few reasons to prefer them over the Raspberry Pi. Even so, a project might be swayed by existing hardware, better tool support for a specific chipset, or ease-of-use considerations. In an upcoming section, we look at one such board, the BeagleBone, with regard to these issues.

CASES AND EXTENSION BOARDS

Still, due to the relative excitement in the mainstream UK media, as well as the usual hacker and maker echo chambers, the Raspberry Pi has had some real focus. Several ecosystems have built up around the device. Because the Pi can be useful as a general-purpose computer or media centre without requiring constant prototyping with electronic components, one of the first demands enthusiasts have had was for convenient and attractive cases for it. Many makers blogged about their own attempts and have contributed designs to Thingiverse, Instructables, and others. There have also been several commercial projects. The Foundation has deliberately not authorized an “official” one, to encourage as vibrant an ecosystem as possible, although staffers have blogged about an early, well-designed case created by Paul Beech, the designer of the Raspberry Pi logo (http://shop.pimoroni.com/products/pibow).
Beyond these largely aesthetic projects, extension boards and other accessories are already available for the Raspberry Pi. Obviously, in the early days of the Pi's existence post launch, there are fewer of these than for the Arduino; however, many interesting kits are in development, such as the Gertboard (www.raspberrypi.org/archives/tag/gertboard), designed for conveniently playing with the GPIO pins. Whereas with the Arduino it often feels as though everything has been done already, in the early days of the Raspberry Pi the situation is more encouraging. A lot of people are doing interesting things with their Pis, but as the platform is so much more high level and capable, the attention may be spread more thinly, from designing cases to porting operating systems to working on media centre plug-ins. Physical computing is just one of the aspects to which attention may be paid.

DEVELOPING ON THE RASPBERRY PI

Whereas the Arduino's limitations are in some ways its greatest feature, the number of variables on the Raspberry Pi is much greater, and there is much more of an emphasis on being able to do things in alternative ways. However, "best practices" are certainly developing. Following are some suggestions at the time of writing.


Operating System


Although many operating systems can run on the Pi, we recommend using a popular Linux distribution, such as:

Raspbian: Released by the Raspberry Pi Foundation, Raspbian is a distro based on Debian. This is the default "official" distribution and is certainly a good choice for general work with a Pi.

Occidentalis: This is Adafruit's customised Raspbian. Unlike Raspbian, this distribution assumes that you will use it "headless" (not connected to keyboard and monitor), so you can connect to it remotely by default. (Raspbian requires a brief configuration stage first.)

For Internet of Things work, we recommend something such as the Adafruit distro. You're most probably not going to be running the device with a keyboard and display, so you can avoid the inconvenience of sourcing and setting those up in the first place. The main tweaks that interest us are:

The sshd (SSH protocol daemon) is enabled by default, so you can connect to the console remotely.

The device registers itself using zero-configuration networking (zeroconf) with the name raspberrypi.local, so you don't need to know or guess which IP address it picks up from the network in order to make a connection.

When we looked at the Arduino, we saw that perhaps the greatest win was the simplicity of the development environment. In the best case, you simply downloaded the IDE and plugged the device into the computer's USB port. (Of course, this elides the odd problem with USB drivers and Internet connections when you are doing Internet of Things work.) With the Raspberry Pi, however, you've already had to make decisions about the distro and download it. Now that distro needs to be unpacked onto the SD card, which you purchase separately. You should note that some SD cards don't work well with the Pi; apparently, "Class 10" cards work best. The class of the SD card isn't always clear from the packaging, but it is visible on the card itself as a number inside a larger circular "C".
At this point, the Pi may boot up, if you have enough power to it from the USB. Many laptop USB ports aren't powerful enough; so, although the "On" light displays, the device fails to boot. If you're in doubt, a powered USB hub seems to be the best bet.

[Figure: An Electric Imp (left), next to a micro SD card (centre), and an SD card (right).]

After you boot up the Pi, you can communicate with it just as you'd communicate with any computer: either with the keyboard and monitor that you've attached or, with the Adafruit distro, via ssh as mentioned previously. The following command, from a Linux or Mac command line, lets you log in to the Pi just as you would log in to a remote server:

$ ssh root@raspberrypi.local

From Windows, you can use an SSH client such as PuTTY (www.chiark.greenend.org.uk/~sgtatham/putty/). After you connect to the device, you can develop a software application for it as easily as you can for any Linux computer. How easy that turns out to be depends largely on how comfortable you are developing for Linux.
Programming Language

One choice to be made is which programming language and environment you want to use. Here, again, there is some guidance from the Foundation, which suggests Python as a good language for educational programming (and indeed the name "Pi" comes initially from Python). Let's look at the "Hello World" of physical computing, the ubiquitous "blinking lights" example:

import RPi.GPIO as GPIO
from time import sleep

GPIO.setmode(GPIO.BOARD)  # set the numbering scheme to be the
                          # same as on the board
GPIO.setup(8, GPIO.OUT)   # set GPIO pin 8 to output mode
led = False
GPIO.output(8, led)       # initialise the LED to off

while 1:
    GPIO.output(8, led)
    led = not led         # toggle the LED status on/off for the
                          # next iteration
    sleep(1)              # sleep for one second

As you can see, this example looks similar to the C++ code on an Arduino. The only real differences are the details of the modularization: the GPIO code and even the sleep() function have to be imported explicitly. However, when you go beyond this level of complexity, using a more expressive "high-level" language like Python will almost certainly make the following tasks easier:

Handling strings of character data
Completely avoiding having to handle memory management (and bugs related to it)
Making calls to Internet services and parsing the data received
Connecting to databases and more complex processing
Abstracting common patterns or complex behaviours

Also, being able to take advantage of readily available libraries on PyPI (https://pypi.python.org/pypi) may well allow simple reuse of code that other people have written, used, and thoroughly tested. So, what's the catch? As always, you have to be aware of a few trade-offs, related either to the Linux platform itself or to the use of a high-level programming language. Later, where we mention "Python", the same considerations apply to most higher-level languages, from Python's contemporaries Perl and Ruby to the compiled VM languages such as Java and C#.
We specifically contrast Python with C++, as the low-level language used for Arduino programming.
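One practical benefit of Python worth noting: because RPi.GPIO only works on the Pi itself, you can separate the blinking logic from the hardware calls and exercise it on any machine with a stand-in GPIO object. A sketch of that idea (FakeGPIO and blink are our illustrative names; only the setup/output method names mirror the real RPi.GPIO module):

```python
# A stand-in that records calls using the same method names the
# RPi.GPIO module exposes, so the toggling logic can be tested
# without a Raspberry Pi attached.
class FakeGPIO:
    OUT = "out"

    def __init__(self):
        self.writes = []

    def setup(self, pin, mode):
        self.pin_mode = (pin, mode)

    def output(self, pin, value):
        self.writes.append((pin, value))

def blink(gpio, pin, cycles):
    gpio.setup(pin, gpio.OUT)
    led = False
    for _ in range(cycles):
        led = not led
        gpio.output(pin, led)  # real code would also sleep(1) here

fake = FakeGPIO()
blink(fake, 8, 4)
```

Passing the GPIO object in as a parameter is what makes the substitution possible; on the Pi you would simply pass the real RPi.GPIO module instead of the fake.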

Python, as with most high-level languages, compiles to relatively large (in terms of memory usage) and slow code, compared to C++. The former is unlikely to be an issue; the Pi has more than enough memory. The speed of execution may or may not be a problem: Python is likely to be “fast enough” for most tasks, and certainly for anything that involves talking to the Internet, the time taken to communicate over the network is the major slowdown.
However, if the electronics of the sensors and actuators you are working with require split-second timing, Python might be too slow. This is by no means certain; if Bubblino starts blowing bubbles a millisecond later, or the DoorBot unlocks the office a millisecond after you scan your RFID card to authenticate, this delay may be acceptable and not even noticeable.
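When timing might matter, it is easy to measure rather than guess. A small sketch using Python's standard timeit module to time a tight toggle loop (the loop body is illustrative; on a Pi you would time your actual sensor or actuator code):

```python
import timeit

# Time 100,000 iterations of a trivial toggle, roughly the kind of
# work the blink loop does between sleeps.
def toggle_loop(n=100_000):
    led = False
    for _ in range(n):
        led = not led
    return led

elapsed = timeit.timeit(toggle_loop, number=1)
per_iteration_us = elapsed / 100_000 * 1e6
```

If the measured per-iteration cost is orders of magnitude below your timing requirement, Python is fast enough for that part of the job; if not, the timing-critical piece can be pushed down into C or onto a microcontroller.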
Python handles memory management automatically. Because handling the precise details of memory allocation is notoriously fiddly, automatic memory management generally results in fewer bugs and performs adequately. However, this automatic work has to be scheduled in and takes some time to complete. Depending on the strategy for garbage collection, this may result in pauses in operation which might affect the timing of subsequent events. Also, because the programmer isn't exposed to the gory details, there may well be cases in which Python quite reasonably holds onto more memory than you might have preferred had you been managing it by hand. In worse cases, the memory may never be released until the process terminates: this is a so-called memory leak. Because an Internet of Things device generally runs unattended for long periods of time, these leaks may build up and eventually end up with the device running out of memory and crashing. (In reality, it's more likely that such memory leaks happen as a result of programming error in manual memory management.)

Linux itself arguably has some issues for "real-time" use. Due to its being a relatively large operating system, with many processes that may run simultaneously, precise timings may vary due to how much CPU priority is given to the Python runtime at any given moment. This hasn't stopped many embedded programmers from moving to Linux, but it may be a consideration for your case.
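The garbage-collection pauses mentioned above can also be observed directly with Python's standard gc and time modules. A short sketch (the object count is arbitrary; reference cycles are used because reference counting alone can never free them):

```python
import gc
import time

gc.disable()  # stop automatic collection so the timed collect()
              # below does all the work

# Build garbage that only the cycle collector can reclaim.
for _ in range(10_000):
    a, b = [], []
    a.append(b)
    b.append(a)  # a <-> b reference cycle

start = time.perf_counter()
collected = gc.collect()  # number of unreachable objects found
pause = time.perf_counter() - start
gc.enable()
```

On a slow device like the Pi, running such a measurement against your real workload tells you whether collection pauses are anywhere near the timing tolerances your sensors and actuators require.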

An Arduino runs only the one set of instructions, in a tight loop, until it is turned off or crashes.
The Pi constantly runs a number of processes. If one of these processes misbehaves, or two of them clash over resources (memory, CPU, access to a file or to a network port), they may cause problems that are entirely unrelated to your code. This is unlikely (many well-run Linux computers run without maintenance for years and run businesses as well as large parts of the Internet) but may result in occasional, possibly intermittent, issues which are hard to identify and debug. We certainly don’t want to put undue stress on the preceding issues! They are simply trade-offs that may or may not be important to you, or rather more or less important than the features of the Pi and the access to a high-level programming language.
The most important issue, again, is probably the ease of use of the environment. If you're comfortable with Linux, developing for a Pi is relatively simple. But it doesn't approach the simplicity of the Arduino IDE. For example, the Arduino starts your code the moment you switch it on. To get the same behaviour under Linux, you could use a number of mechanisms, such as an initialisation script in /etc/init.d. First, you would create a wrapper script, for example, /etc/init.d/StartMyPythonCode. This script would start your code if it's called with a start argument and stop it if called with stop. Then, you need to use the chmod command to mark the script as something the system can run: chmod +x /etc/init.d/StartMyPythonCode. Finally, you register it to run when the machine is turned on by calling sudo update-rc.d StartMyPythonCode defaults. If you are familiar with Linux, you may be familiar with this mechanism for automatically starting services (or indeed have a preferred alternative). If not, you can find tutorials by Googling for "Raspberry Pi start program on boot" or similar. Either way, although setting it up isn't hard per se, it's much more involved than the Arduino way if you aren't already working in the IT field.

Debugging

While Python's compiler also catches a number of syntax errors and attempts to use undeclared variables, it is a relatively permissive language (compared to C++) which performs a greater number of checks at runtime. This means that additional classes of programming errors won't cause failure at compilation but will crash the program when it's running, perhaps days or months later. Whereas the Arduino has fairly limited debugging capabilities, mostly involving outputting data via the serial port or using side effects like blinking lights, Python code on Linux gives you the advantages of both the language and the OS.
You could step through the code using Python’s integrated debugger, attach to the process using the Linux strace command, view logs, see how much memory is being used, and so on. As long as the device itself hasn’t crashed, you may be able to ssh into the Raspberry Pi and do some of this debugging while your program has failed (or is running but doing the wrong thing).
Because the Pi is a general-purpose computer, without the strict memory limitations of the Arduino, you can simply use try...catch... logic (in Python, try...except...) to trap errors in your code and determine what to do with them. For example, you would typically take the opportunity to log details of the error (to help the debugging process) and see whether the unexpected problem can be dealt with so that the code can continue running. In the worst case, you might simply stop the script and have it restart again afresh! Python and other high-level languages also have mature testing tools which allow you to assert expected behaviours of your routines and test that they perform correctly. This kind of automated testing is useful when you're working out whether you've finished writing correct code, and it can also be rerun after making other changes, to make sure that a fix in one part of the code hasn't caused a problem in another part that was working before.
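That trap-log-and-continue pattern can be sketched in a few lines of Python (the read_sensor stub and the specific exceptions caught are invented for illustration):

```python
import logging

logging.basicConfig(level=logging.ERROR)

def read_sensor(raw):
    # Stub standing in for real sensor input; raises on bad data.
    return 100 / int(raw)

def safe_read(raw, default=None):
    """Trap errors, log them for later debugging, keep running."""
    try:
        return read_sensor(raw)
    except (ValueError, ZeroDivisionError) as exc:
        logging.error("bad sensor reading %r: %s", raw, exc)
        return default

good = safe_read("4")   # normal reading
bad = safe_read("0")    # error is logged; None returned, no crash
```

The logged messages accumulate in whatever log destination you configure, so when the device has been running unattended for months, you still have a record of what went wrong and when.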


SOME NOTES ON THE HARDWARE

The Raspberry Pi has 8 GPIO pins, which are exposed along with power and other interfaces in a 2-by-13 block of male header pins. Unlike those on the Arduino, the pins on the Raspberry Pi aren't individually labelled. This makes sense due to the greater number of components on the Pi and also because the expectation is that fewer people will use the GPIO pins, and you are discouraged from soldering directly onto the board. The intention is rather that you will plug a cable (IDC or similar) onto the whole block, which leads to a "breakout board" where you do the actual work with the GPIO. Alternatively, you can connect individual pins using a female jumper lead onto a breadboard. The pins are documented on the schematics. A female-to-male jumper is easiest for connecting from the "male" pin to the breadboard. If you can find only female-to-female jumpers, you can simply place a header pin on the breadboard or make your own female-to-male jumper by connecting a male-to-male with a female-to-female! These jumpers are available from hobbyist suppliers such as Adafruit, Sparkfun, and Oomlout, as well as the larger component vendors such as Farnell. The block of pins provides both 5V and 3.3V outputs. However, the GPIO pins themselves are only 3.3V tolerant. The Pi doesn't have any over-voltage protection, so you are at risk of breaking the board if you supply a 5V input! The alternatives are either to proceed with caution or to use an external breakout board that has this kind of protection. At the time of writing, we can't recommend any specific such board, although the Gertboard, which is mentioned on the official site, looks promising. Note that the Raspberry Pi doesn't have any analogue inputs (ADC), which means that options to connect it to electronic sensors are limited, out of the box, to digital inputs (that is, on/off inputs such as buttons).
To get readings from light-sensitive photocells, temperature sensors, potentiometers, and so on, you need to connect it to an external ADC via the SPI bus. You can find instructions on how to do this at, for example, http://learn.adafruit.com/reading-a-analog-in-and-controlling-audio-volumewith-the-raspberry-pi/overview.
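For example, the MCP3008 ADC chip commonly used in such tutorials returns its 10-bit reading spread across the bytes of the SPI response. The decoding arithmetic is pure Python and can be sketched (and tested) without any hardware; only the SPI transfer itself, shown in a comment, needs a real Pi and chip (the MCP3008 choice and the spidev call are assumptions based on common practice, not a requirement of the Pi):

```python
# Decode an MCP3008 SPI response into a 10-bit reading (0-1023).
# On a Pi, `response` would typically come from an SPI transfer such
# as spi.xfer2([1, (8 + channel) << 4, 0]) using the spidev library.
# Here we decode hand-made responses instead.
def decode_mcp3008(response):
    # The 10-bit result is the low 2 bits of byte 1, followed by
    # all 8 bits of byte 2.
    return ((response[1] & 0x03) << 8) | response[2]

full_scale = decode_mcp3008([0, 0x03, 0xFF])  # maximum reading
mid_scale = decode_mcp3008([0, 0x02, 0x00])
```

Keeping the decode step as a separate pure function like this means the fiddly bit-twiddling can be unit-tested on any machine, with only the thin SPI-transfer wrapper needing the actual hardware.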
We mentioned some frustrations with powering the Pi earlier: although it is powered by a standard USB cable, the voltage transmitted over USB from a laptop computer, a powered USB hub, or a USB charger varies greatly. If you’re not able to power or to boot your Pi, check the power requirements and try another power source.
The Raspberry Pi team members have been able to publish certain materials, such as PDFs of the Raspberry Pi board schematics, and so on. However, the answer to the question "Is it open hardware?" is currently "Not yet" (www.raspberrypi.org/archives/1090#comment-20585).
It is worth noting that the Broadcom chip is currently harder to source than either the widely available Atmel chips in the Arduino or the Texas Instruments chip in the BeagleBone. This could make it harder to spin up a prototype into a product.


LASER CUTTING

Although the laser cutter doesn't get the same press attention as the 3D printer, it is arguably an even more useful item to have in your workshop. Three-dimensional printers can produce more complicated parts, but the simpler design process (for many shapes, breaking a design into a sequence of two-dimensional planes is easier than designing in three dimensions), greater range of materials which can be cut, and faster speed make the laser cutter a versatile piece of kit. Laser cutters range from desktop models to industrial units which can take a full 8' by 4' sheet in one pass. Most commonly, though, they are floor standing and about the same size as a large photocopier. Most of the laser cutter is given over to the bed; this is a flat area that holds the material to be cut. The bed contains a two-axis mechanism with mirrors and a lens to direct the laser beam to the correct location and focus it onto the material being cut. It is similar to a flatbed plotter, but one that burns things rather than drawing on them. The computer controls the two-axis positioning mechanism and the power of the laser beam. This means that not only can the machine easily cut all manner of intricate patterns, but it can also lower the power of the laser so that it doesn't cut all the way through. At a sufficiently low power, this feature enables you to etch additional detail into the surface of the piece. You can also etch at different power levels to achieve different depths of etching, but whilst the levels will be visibly different, the process isn't precise enough to choose a set fraction of a millimetre of depth.
CHOOSING A LASER CUTTER

When choosing a laser cutter, you should consider two main features:
The size of the bed: This is the place where the sheet of material sits while it's being cut, so a larger bed can cut larger items. You don't need to think just about the biggest item you might create; a larger bed allows you to buy material in bigger sheets (which is more cost effective), and if you move to small-scale production, it would let you cut multiple units in one pass.

The power of the laser: More powerful lasers can cut through thicker material. For example, the laser cutter at our workplace has a 40W laser, which can cut up to 10mm-thick acrylic. Moving a few models up in the same range, to one with a 60W laser, would allow us to cut 25mm-thick acrylic.

Depending on what you're trying to create, you can cut all sorts of different materials in a laser cutter. Whilst felt, leather, and other fabrics are easy to cut, for Internet of Things devices you will probably be looking at something more rigid. Card and, particularly, corrugated cardboard are good for quick tests and prototyping, but MDF, plywood, and acrylic (also commonly known by the brand name Perspex) are the most common choices. Specialised materials are also available for specific purposes. For example, laserable rubber can be used to create ink stamps, and laminate acrylic provides a thin surface in one colour, laminated onto a thicker layer in a contrasting colour, so that you can etch through the thin layer for crisp, high-contrast detailing and text. Whilst you can get laser cutters which cut metal, they tend to be the more powerful, industrial units. The lower-powered models don't cut through the metal; and worse, as the shiny surface of many metals does an excellent job of reflecting the laser beam, you run a real risk of damaging the machine. Laser cutters can be used to etch metals, though, if you've carefully prepared the reflective surface beforehand with a ceramic coating compound, such as CerMark. Once coated, either from a spray can or as tape, the laser will fuse the compound with the underlying metal to leave a permanent dark mark. If you don't have a laser cutter of your own, there is a good chance that your local makerspace or hackspace will have one that you could use. You might even be able to obtain access to one at a local university or college. Failing that, laser-cutting bureau services, somewhat like copy shops, are becoming
increasingly common. Architects often use these services to help them build architectural models, so that could provide a starting place for your search. If that approach proves fruitless, a number of online providers, such as Ponoko (http://www.ponoko.com), let you upload designs that they cut and then post back to you.

SOFTWARE
The file formats or software which you need to use to provide your design vary across machines and providers. Although some laser-cutting software will let you define an engraving pattern with a bitmap, typically you use some type of vector graphics format. Vector formats capture the drawing as a series of lines and curves, which translate much better into instructions for moving the laser cutter than the grid-like representation of a bitmap. There's also no loss in fidelity as you resize the image. With a bitmap, as you might have seen if you've ever tried blowing up one small part of a digital photo, the details become jagged as you zoom in closely, whereas the vector format knows that it's still a single line and can redraw it with more detail. CorelDRAW is a common choice for driving the laser cutters themselves, and you can use it to generate the designs too. Other popular options are Adobe Illustrator, as many designers already have a copy installed and are familiar with driving it, and Inkscape, largely because it's an open source alternative and therefore freely available. The best choice is the one you're most comfortable working with, or failing that, either the one your laser cutter uses or the one you can afford. When creating your design, you use the stroke (or outline) of the shapes and lines rather than the filled area to define where the laser will cut and etch. The kerf, the width of the cut made by the laser, is about 0.2mm but isn't something you need to include in the design. A thinner stroke width is better, as it will stop the laser cutter from misinterpreting it as two cuts when you need only one.
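Although you don't draw the kerf itself, you may need to compensate for it when parts must fit together precisely: the beam removes roughly half the kerf width from each side of a cut line, so parts come out slightly smaller and holes slightly larger than drawn. A small sketch of that arithmetic (the 0.2mm figure is the approximate kerf mentioned above; measure your own cutter's kerf on scrap material for real work):

```python
# Compensate drawn dimensions for laser kerf. The beam removes about
# kerf/2 from each side of a cut, i.e. a full kerf width across any
# dimension, so parts finish smaller and holes finish larger.
KERF_MM = 0.2  # approximate; varies with machine and material

def drawn_part_size(desired_mm, kerf=KERF_MM):
    # Draw the part larger so the finished piece measures as desired.
    return desired_mm + kerf

def drawn_hole_size(desired_mm, kerf=KERF_MM):
    # Draw the hole smaller so the finished hole measures as desired.
    return desired_mm - kerf

part = drawn_part_size(50.0)  # draw 50.2mm for a 50mm finished part
hole = drawn_hole_size(5.0)   # draw 4.8mm for a 5mm finished hole
```

For loose-fitting decorative work this correction is usually unnecessary; it matters for press-fit tabs, slots, and shaft holes.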
Different types of operation, cut versus etch or even different levels of etching, can usually be included in the same design file just by marking them in different colours. Whoever is doing your cutting may have a set colour-scheme convention for the different settings; if so, make sure that you follow it.

3D PRINTING

Additive manufacturing, or 3D printing as it's often called, is fast becoming one of the most popular forms of rapid prototyping, largely down to the ever-increasing number of personal 3D printers, available at ever-falling costs. A number of desktop models, available for less than £500, now produce decent-quality results. The term additive manufacturing is used because all the various processes used to produce the output start with nothing and add material to build up the resulting model. This is in contrast to subtractive manufacturing techniques such as laser cutting and CNC milling, where you start with more material and cut away the parts you don't need.

Various processes are used for building up the physical model, which affect which materials that printer can use, among other things. However, all of them take a three-dimensional computer model as the input. The software slices the computer model into many layers, each a fraction of a millimetre thick, and the physical version is built up layer by layer.

One of the great draws of 3D printing is how it can produce items which wouldn't be possible with traditional techniques. For example, because you can print interlocking rings without any joins, you can use the metal 3D printers to print entire sheets of chain-mail which come out of the printer already connected together. If only the medieval knights had had access to a metal laser-sintering machine, their armour would have been much easier to produce. Another common trick with 3D printing is to print pieces which include moving parts: it is possible to print all the parts at the same time and have them come out ready-assembled. This effect is achieved with the use of what is called "support material". In some processes, such as the powder-based methods, this is a side effect of the printing technique; while the print is in progress, the raw powder takes up the space for what will become the air gap. Afterwards, you can simply shake or blow the loose powder out of your solid print. Other processes, such as the extruded-plastic techniques, require you to print a second material, which takes the supporting role. When the print is finished, this support material is either broken off or washed away. (The support material is specifically chosen to dissolve in water or another solution which doesn't affect the main printing material.)


TYPES OF 3D PRINTING

Lots of innovation is still happening in the world of additive manufacturing, but the following are some of the more common methods of 3D printing in use today:
Fused filament fabrication (FFF): Also known as fused deposition modeling (FDM), this is the type of 3D printer you're most likely to see at a maker event. The RepRap and MakerBot designs both use this technique, as do the Stratasys machines at the industrial level. It works by extruding a fine filament of material (usually plastic) from a heated nozzle. The nozzle can be moved horizontally and vertically by the controlling computer, as can the flow of filament through the nozzle. The resulting models are quite robust, as they're made from standard plastic. However, the surface can show visible ridging from the thickness of the filament.

Laser sintering: This process is sometimes called selective laser sintering (SLS), electron beam melting (EBM), or direct metal laser sintering (DMLS). It is used in more industrial machines but can print any material which comes in powdered form and can be melted by a laser. It provides a finer finish than FDM, but the models are just as robust, and they're even stronger when the printing medium is metal. This technique is used to print aluminium or titanium, although it can just as easily print nylon; MakieLab uses laser-sintered nylon to 3D print the physical versions of its dolls.

Powder bed: Like laser sintering, powder-bed printers start with the raw material in powder form, but rather than fusing it together with a laser, they dispense a glue-like binder from a print head similar to one in an inkjet printer. The Z Corp. machines use this technique, with a print medium similar in texture to plaster. After printing, the models are quite brittle, so they need post-processing in which they are sprayed with a hardening solution. The great advantage of these printers is that pigment can be mixed into the binder as it is applied, so full-colour prints can be produced in one pass.

Laminated object manufacturing (LOM): This is another method which can produce full-colour prints. LOM uses traditional paper printing as part of the process. Because it builds up the model by laminating many individual sheets of paper together, it can print whatever colours are required onto each layer before cutting them to shape and gluing them into place. The Mcor IRIS is an example of this sort of printer.

Stereolithography and digital light processing: Stereolithography is possibly the oldest 3D printing technique and has a lot in common with digital light processing, which is enjoying a huge surge in popularity and experimentation at the time of this writing. Both approaches build their models from a vat of liquid polymer resin which is cured by exposure to ultraviolet light. Stereolithography uses a UV laser to trace the pattern for each layer, whereas digital light processing uses a DLP projector to cure an entire layer at a time. Whilst these approaches are limited to printing with resin, the resulting models have a fine resolution. The combination of this with the relatively low cost of DLP projectors makes this a fertile area for the development of more affordable high-resolution printers.

Deciding which 3D printer to use is likely to be governed mostly by what kind of machine you have ready access to. The industrial-level machines cost tens of thousands of pounds, so most of us don't have the luxury of buying one. If you do have that sort of budget, we're jealous; enjoy your shopping! For the rest of us, a few options are available. If you live near a fab lab or TechShop, they usually have a 3D printer that you can use. Similarly, local universities often have such facilities in their engineering or product design departments and might grant you access. You may also find a local bureau service which will print your designs for you; these services are becoming increasingly common. Recently, Staples announced a service to deliver 3D prints for collection in its stores in the Netherlands. The Staples announcement is just another, albeit well-known, entrant into the 3D-printing-by-post market. Shapeways (http://www.shapeways.com/), i.materialise (http://i.materialise.com/), and Ponoko (https://www.ponoko.com/) have all been offering similar services for a while now. You upload your design online, choose how you want it printed, and a few days later receive it in the post. Many of these services even let you sell your designs, with them handling the fulfillment for you.

If you don't need the specialist materials or high resolution of the high-end machines, there's a good chance that your local hackspace or makerspace will have one of the lower-cost desktop machines; the pricing of these machines is such that buying one of your own is also an option. In fact, for most prototyping work, one can argue that the greater access and lower cost of materials in that approach far outweigh the disadvantages.


CNC MILLING

Computer Numerically Controlled (CNC) milling is similar to 3D printing but is a subtractive manufacturing process rather than an additive one. The CNC part just means that a computer controls the movement of the milling head, much as it does the extruder in an FDM 3D printer. However, rather than building up the desired model layer by layer from nothing, it starts with a block of material larger than the finished piece and cuts away the parts which aren't needed, much like a sculptor chips away at a block of stone to reveal the statue, except that milling uses a rotating cutting bit (similar to an electric drill) rather than a chisel.

Because cutting away material is easier, CNC mills can work with a much greater range of materials than 3D printers can. You still need an industrial-scale machine to work with hardened steel, but wax, wood, plastic, aluminium, and even mild steel can be readily milled, even with desktop mills. CNC mills can also be used for more specialised (but useful when prototyping electronic devices) tasks, such as creating custom printed circuit boards. Rather than sending your PCB design away to be fabricated or etching it with acid, you can convert it into a form which your CNC mill can rout out; that is, the mill cuts away lines from the metal surface of the board, leaving the conductive paths. An advantage of milling over etching the board is that you can have the mill drill any holes for components or mounting at the same time, saving you from having to do it manually afterwards with your drill press.

A wide range of CNC mills is available, depending on the features you need and your budget. Sizes range from small mills which will fit onto your desktop through to much larger machines with a bed size measured in metres. There are even CNC mills which fill an entire hangar, but they tend to be bespoke constructions for a very specific task, such as creating moulds for wind turbine blades. Bigger is not always better, though; the challenge of accurately moving the carriage around increases with size, so smaller mills are usually able to machine to higher tolerances. That said, the difference in resolution is only from high to extremely high. CNC mills can often achieve resolutions of the order of 0.001mm, which is a couple of orders of magnitude better than the current generation of low-end 3D printers. Beyond size and accuracy, the other main attribute that varies among CNC mills is the number of axes of movement they have:

2.5 axis: Whilst this type has three axes of movement (X, Y, and Z), it can move only two of them at any one time.

3 axis: Like the 2.5-axis machine, this machine has a bed which can move in the X and Y axes, and a milling head that can move in the Z. However, it can move all three at the same time (if the machining instructions call for it).

4 axis: This machine adds a rotary axis to the 3-axis mill, allowing the piece being milled to be rotated around an extra axis, usually the X (this is known as the A axis). An indexed axis only allows the piece to be rotated to set points so that a further milling pass can then be made, for example after flipping it over to mill the underside; a fully controllable rotary axis allows the rotation to happen as part of the cutting instructions.

5 axis: This machine adds a second rotary axis—normally around the Y—which is known as the B axis.

6 axis: A third rotary axis, known as the C axis if it rotates around Z, completes the range of movement in this machine.

For prototyping work, you're unlikely to need anything beyond a 3-axis mill, although a fourth axis would give you some extra flexibility. The 5- and 6-axis machines tend to be the larger, more industrial units.

As with 3D printing, the software you use for CNC milling is split into two types: CAD (Computer-Aided Design) software lets you design the model, and CAM (Computer-Aided Manufacture) software turns that design into a suitable tool path, a list of coordinates for the CNC machine to follow which will result in the model being revealed from the block of material. Tool paths are usually expressed in a quasi-standard called G-code. Whilst most of the movement instructions are common across machines, a wide variety exists in the codes for things such as initializing the machine. That said, a number of third-party CAM packages are available, so with luck you will have a choice of which to use.

 

Techniques for writing embedded C code:-

 

Every programming language has its own variety of syntactic and semantic features. Some of these have more significance in software written for real-time embedded systems than in normal desktop applications. Writing software for real-time embedded systems demands a careful understanding of integer data types and of facilities for manipulating bits and bytes. This section describes in detail how these facilities are implemented, using features of C that are only skimmed over in most introductory courses and usually mastered only after years of experience.

C offers an extensive set of integer data types. The reserved words char and int are the fundamental names for integer data types, with unsigned, signed, short, and long as modifiers. If one or more modifiers appear in a declaration, the reserved word int is understood and may be omitted. If neither the signed nor the unsigned modifier is used, signed is assumed by default.

Note:- A plain int by itself, with no modifier, results in a data type whose size is compiler dependent.

DATA TYPE            SIZE (bits)   RANGE
unsigned char        8             0 to 255
unsigned int         16            0 to 65,535
unsigned long int    32            0 to 4,294,967,295
signed char          8             -128 to 127
signed int           16            -32,768 to 32,767
signed long int      32            -2,147,483,648 to +2,147,483,647

(The sizes shown are for a typical 16-bit embedded compiler; as noted above, the size of a plain int is compiler dependent.)

 

Manipulating Bits AND, OR, XOR and NOT:-

Manipulation of bits is usually implemented via a set of macros or functions that rely on C's bitwise operators. As shown in the table below, both the bitwise and Boolean operators provide the basic operations of AND, OR, exclusive-OR (XOR), and NOT. However, Boolean operators are used to form conditional expressions (as in an if statement), while the corresponding bitwise operators are used to manipulate individual bits.


Operation    Boolean Operator    Bitwise Operator
AND          &&                  &
OR           ||                  |
XOR          (unsupported)       ^
NOT          !                   ~

 

Most C compilers don't provide a Boolean data type. Instead, Boolean operators yield results of type int, with true and false represented by 1 and 0, respectively. Any numeric data type may be used as a Boolean operand: a value equal to zero is interpreted as false, and any nonzero value is interpreted as true.

Bitwise operators work on individual bit positions within the operands; that is, the result in any single bit position is entirely independent of all the other bit positions. Bitwise operators treat each operand as an ordered bit vector and produce bit-vector results. Boolean operators, however, treat each multi-bit operand as a single value to be interpreted as either true or false.

Boolean:   (5 || !3) && 6    =>  (true OR (NOT true)) AND true
                             =>  (true OR false) AND true
                             =>  true AND true
                             =>  true
                             =>  1

Bitwise:   (5 | ~3) & 6      =>  (00..0101 | ~00..0011) & 00..0110
                             =>  (00..0101 | 11..1100) & 00..0110
                             =>  11..1101 & 00..0110
                             =>  00..0100
                             =>  4

Interpreting the bitwise-AND (m is a mask bit, b is a data bit):

m   b   m & b
0   0     0      a mask bit of 0 forces the result to 0
0   1     0
1   0     0      a mask bit of 1 passes b through unchanged
1   1     1

Interpreting the bitwise-OR:

m   b   m | b
0   0     0      a mask bit of 0 passes b through unchanged
0   1     1
1   0     1      a mask bit of 1 forces the result to 1
1   1     1

Interpreting the bitwise-XOR:

m   b   m ^ b
0   0     0      a mask bit of 0 passes b through unchanged
0   1     1
1   0     1      a mask bit of 1 inverts b
1   1     0

Interpreting the bitwise-NOT:

m   ~m
0    1
1    0

Reading and Writing I/O Ports:-

It is a common hardware-design practice for the command and status ports of I/O devices to contain packed information.

UART modem status port (read only), bit 7 down to bit 0:

Bit 7: Carrier Detect
Bit 6: Ring Indicator
Bit 5: Data Set Ready
Bit 4: Clear To Send
Bit 3: Delta Carrier Detect
Bit 2: Trailing Edge of Ring Indicator
Bit 1: Delta Data Set Ready
Bit 0: Delta Clear To Send

(The lower four bits indicate a change in the corresponding status line since the register was last read.)

 

IBM-PC printer control port (write only), bit 7 down to bit 0:

Bit 7: Reserved
Bit 6: Reserved
Bit 5: Reserved
Bit 4: Enable IRQ
Bit 3: Select Printer
Bit 2: Initialize Printer
Bit 1: Auto Line Feed
Bit 0: Data Strobe

Occasionally, some I/O devices are designed so that they respond to a memory address instead of an I/O port address. The memory address of such devices is fixed, and thus the I/O data must be accessed via a pointer whose value has been initialized to that address.

Most I/O devices, however, are designed to respond to an I/O port address and not a memory address. Their data cannot be accessed either directly, via the name of a variable, or indirectly, via a pointer. Calling a special-purpose function is the only way to access such data, since C has no syntax for I/O ports. But the situation is even more complex: I/O ports are often either read-only or write-only, and it is also quite common for several I/O ports to be assigned to a single I/O address.

#include <reg51.h>

void main(void)
{
    unsigned char Port1_value;

    P1 = 0xFF;               /* Must write 1s to P1 to use it as an input port */
    while (1)
    {
        Port1_value = P1;    /* Read the value of P1 */
        P2 = Port1_value;    /* Copy the value to P2 */
    }
}
In the example above, we have seen how to copy the value read from one port of the microcontroller to another port.

Simple Embedded System Program for LED Blinking:-

In the program below we turn the LEDs connected to Port 0 of the LPC2148 on and off. So that the blinking is clearly visible, we insert a software delay between the two states, implemented in a function named delay. To keep the LEDs blinking continuously, the code runs in an infinite loop written with while(1).

#include <lpc214x.h>

/* Crude software delay loop */
void delay(void)
{
    int i, j;
    for (i = 0; i < 1000; i++)
        for (j = 0; j < 1000; j++)
            ;
}

int main(void)
{
    IODIR0 |= 0xFFFFFFFF;     /* Configure all Port 0 pins as outputs   */
    while (1)
    {
        delay();
        IO0SET = 0xFFFFFFFF;  /* Drive all Port 0 pins high (LEDs on)   */
        delay();
        IO0CLR = 0xFFFFFFFF;  /* Drive all Port 0 pins low (LEDs off)   */
    }
}

Controlling of Motor:-

The maximum current that can be sourced from an 8051 µC pin is about 15 mA at 5 V, but a DC motor needs more current than that, and it needs voltages of 6 V, 12 V, 24 V and so on, depending on the type of motor used. Another problem is that the back EMF produced by the motor may affect the proper functioning of the microcontroller. For these reasons we can't connect a DC motor directly to a microcontroller.

To overcome these problems, you may build an H-bridge using transistors. Freewheeling (clamp) diodes should be used to avoid problems due to back EMF. However, this requires transistors, diodes and resistors, which makes the circuit bulky and difficult to assemble.

The L293D driver IC solves this problem completely. It is a quadruple half-H-bridge driver, so there is no need to connect any transistors, resistors or diodes; we can easily control its switching from the µC. There are two ICs in this category: the L293D can provide a maximum current of 600 mA from 4.5 V to 36 V, while the L293 can provide up to 1 A under the same conditions.

The L293D contains four half-H-bridge drivers, which are enabled in pairs: EN1 enables pair 1 (IN1-OUT1, IN2-OUT2) and EN2 enables pair 2 (IN3-OUT3, IN4-OUT4). One L293D can drive two DC motors, but here we are using only one; you can connect a second DC motor to driver pair 2 according to your needs.

The DC motor is connected to the first pair of drivers and is enabled by tying EN1 to logic HIGH (5 V). The VSS pin provides the logic supply to the L293D; since the 8051 microcontroller controlling it works at 5 V, the logic voltage is 5 V. The motor supply is connected to the Vs pin of the L293D.

Control signals and motor status

P2.0/IN1    P2.1/IN2    Motor Status
LOW         LOW         Stops
LOW         HIGH        Clockwise
HIGH        LOW         Anti-clockwise
HIGH        HIGH        Stops

 

#include <reg52.h>

void delay(void);

sbit motor_pin_1 = P2^0;   /* IN1 of the L293D */
sbit motor_pin_2 = P2^1;   /* IN2 of the L293D */

void main()
{
    do
    {
        motor_pin_1 = 1;
        motor_pin_2 = 0;    /* Rotates motor anti-clockwise */
        delay();
        motor_pin_1 = 1;
        motor_pin_2 = 1;    /* Stops motor */
        delay();
        motor_pin_1 = 0;
        motor_pin_2 = 1;    /* Rotates motor clockwise */
        delay();
        motor_pin_1 = 0;
        motor_pin_2 = 0;    /* Stops motor */
        delay();
    } while (1);
}

void delay()
{
    int i, j;
    for (i = 0; i < 1000; i++)
        for (j = 0; j < 1000; j++)
            ;
}

Temperature sensor for Arduino Board:-

The LM35 series are precision integrated-circuit temperature sensors with an output voltage linearly proportional to the Centigrade temperature.
The LM35 has an advantage over linear temperature sensors calibrated in Kelvin, as the user is not required to subtract a large constant voltage from the output to obtain convenient Centigrade scaling. The LM35 does not require any external calibration or trimming to provide typical accuracies of ±¼°C at room temperature and ±¾°C over the full −55°C to 150°C temperature range.

Technical Specifications
Calibrated directly in Celsius (Centigrade)
Linear + 10-mV/°C scale factor
0.5°C ensured accuracy (at 25°C)
Rated for full −55°C to 150°C range
Suitable for remote applications

Components Required
You will need the following components −
1 × Breadboard
1 × Arduino Uno R3
1 × LM35 sensor
The LM35 sensor has three terminals: +Vs, Vout and GND.
            Connect the +Vs to +5v on your Arduino board.
            Connect Vout to Analog0 or A0 on Arduino board.
            Connect GND with GND on Arduino.


float temp;
int tempPin = 0;    // LM35 Vout connected to analog pin A0

void setup()
{
    Serial.begin(9600);
}

void loop()
{
    temp = analogRead(tempPin);   // read the analog voltage from the sensor
    temp = temp * 0.48828125;     // convert the reading to its temperature
                                  // equivalent in degrees Celsius
    Serial.print("TEMPERATURE = ");
    Serial.print(temp);           // display the temperature value
    Serial.print("*C");
    Serial.println();
    delay(1000);                  // update the sensor reading every second
}

The temperature is displayed on the serial monitor and is updated every second.

UNIT-IV

Cloud Computing:

Cloud computing is a transformative computing paradigm that involves delivering applications and services over the Internet. Cloud computing involves provisioning computing, networking and storage resources on demand and providing these resources as metered services to the users, in a "pay as you go" model. Cloud computing resources can be provisioned on demand by the users, without requiring interactions with the cloud service provider; the process of provisioning resources is automated. Cloud computing resources can be accessed over the network using standard access mechanisms that provide platform-independent access through the use of heterogeneous client platforms such as workstations, laptops, tablets and smartphones.

Cloud computing services are offered to users in different forms:

Infrastructure as a Service (IaaS): hardware is provided by an external provider and managed for you.

Platform as a Service (PaaS): in addition to hardware, your operating system layer is managed for you.

Software as a Service (SaaS): further to the above, an application layer is provided and managed for you; you won't see or have to worry about the first two layers.

Infrastructure as a service

Infrastructure as a service (IaaS) is a type of cloud computing that lets you allocate your compute, network, storage and security resources on demand. The IBM approach to IaaS lets you scale and shrink resources as needed around the world in more than 60 data centers.

Get access to the full stack of compute, down to the bare metal. Get more control. Customize hardware to your exact specifications to meet the precise demands of your workload.

Infrastructure-as-a-Service is a cloud-computing offering in which a vendor provides users access to computing resources such as servers, storage and networking. Organizations use their own platforms and applications within a service provider’s infrastructure.

Key features

·         Instead of purchasing hardware outright, users pay for IaaS on demand.

·         Infrastructure is scalable depending on processing and storage needs.

·         Saves enterprises the costs of buying and maintaining their own hardware.

·         Because data is hosted in the cloud and can be replicated across the provider's infrastructure, a single equipment failure need not become a single point of failure.

·         Enables the virtualization of administrative tasks, freeing up time for other work.

 

Platform as a Service (PaaS): in addition to hardware, your operating system layer is managed for you

PaaS, or Platform-as-a-Service, is a cloud computing model that provides customers a complete platform—hardware, software, and infrastructure—for developing, running, and managing applications without the cost, complexity, and inflexibility of building and maintaining that platform on-premises.

The PaaS provider hosts everything—servers, networks, storage, operating system software, databases—at their data center; the customer uses it all for a monthly fee based on usage and can purchase more resources on demand as needed. In this way, PaaS lets your development teams build, test, deploy, maintain, update, and scale applications (and innovate in response to market opportunities and threats) much more quickly and less expensively than they could if they had to build out and manage their own on-premises platform.


Platform as a service (PaaS) is a cloud computing offering that provides users with a cloud environment in which they can develop, manage and deliver applications. In addition to storage and other computing resources, users are able to use a suite of prebuilt tools to develop, customize and test their own applications.

Key features

  • PaaS provides a platform with tools to test, develop and host applications in the same environment.
  • Enables organizations to focus on development without having to worry about underlying infrastructure.
  • Providers manage security, operating systems, server software and backups.
  • Facilitates collaborative work even if teams work remotely.

Software as a Service (SaaS): further to the above, an application layer is provided and managed for you – you won’t see or have to worry about the first two layers.

Software as a service — simply defined as cloud-based applications accessed through the web or an API — helps you stay ahead of your competition. Gain cognitive analytics, innovative business processes and better customer experiences with ready-to-use SaaS apps that deploy rapidly and with minimal impact on IT resources. From human resources and marketing to finance, IT, and many other roles, SaaS apps help you move faster. IBM Cloud™ SaaS apps are scalable around the world and are designed to be secure so your data stays safe.

SaaS is a cloud computing offering that provides users with access to a vendor's cloud-based software. Users do not install applications on their local devices. Instead, the applications reside on a remote cloud network accessed through the web or an API. Through the application, users can store and analyze data and collaborate on projects.

Key features

  • SaaS vendors provide users with software and applications via a subscription model.
  • Users do not have to manage, install or upgrade software; SaaS providers manage this.
  • Data is secure in the cloud; equipment failure does not result in loss of data.
  • Use of resources can be scaled depending on service needs.
  • Applications are accessible from almost any internet-connected device, from virtually anywhere in the world.

Communication API

Communication APIs are APIs that give businesses the ability to embed voice calling, text messaging and other communications functionality into a software application or product. From a developer standpoint, APIs are important because they allow capabilities of a specific program to be used interchangeably with another, meaning these programs are able to communicate. 

Communication APIs are built for the communication space, and were brought on after the development and deployment of Communications-Platform-as-a-Service (CPaaS), allowing enterprises to achieve seamless communications across a plethora of different channels. Built to serve as a type of liaison between an application and a database, Communication APIs help businesses leverage data securely and with ease. 

Types of Communication APIs

There are many different types of APIs that can be integrated into software, even existing software, so businesses can adopt as many as they want with ease. Some of the most common types of communication APIs include:

·         SMS APIs

o    SMS APIs let businesses easily integrate SMS text functions into an application and offer a variety of features that allow customers to access emojis, pictures, audio, long message support, PIN codes, notifications and more.  

·         MMS APIs

o    MMS APIs let businesses easily integrate MMS text functions into an application and offer customers the ability to send video or picture messages, and can even be used for group messaging.

·         Voice Calling APIs

o    Voice Calling APIs let businesses easily integrate voice functions into an application, including automated and controlled call routing, conference calling, call recording, text-to-speech and more. 

·         REST APIs

o    REST APIs are utilized to send and receive phone calls and text messages, along with managing phone number operations. 

·         Emergency Calling APIs

o    Emergency Calling APIs let businesses empower their users to contact emergency services how and where they want, giving them peace of mind and preparedness for when disaster strikes. 

Why Are Businesses Adopting Communication APIs?

Communication APIs provide businesses with the cutting-edge platforms and programs needed to stay up to date in our vastly growing world of technology. CPaaS, in general, is quickly evolving into the enterprise space for businesses who want to take control of their communication platforms. Communication APIs are a cost-effective way of allowing extensive communications internally and externally, increasing productivity, efficiency, and collaboration, while keeping integration pains at a minimum.

Where Can I Get Communication APIs?

Finding APIs can be a small challenge, because not all APIs are created equal. Tools such as API directories help, because they catalogue APIs from providers' sites and help developers see which will be the best match for their system. If you are starting from scratch with building out your business's APIs, you can browse directories, like that of ProgrammableWeb, and choose which provider you think would best fit your needs. If you already have a provider, integrating new APIs will not be a challenge; just contact your current support team.

What Consumers Need to Know About Communication APIs

Communication APIs are moving businesses forward in markets they never thought possible. Due to APIs allowing computer programs to interchange capabilities and communicate with one another, businesses are able to grow like never before. Communication APIs are driving new waves of innovation and transforming business processes. 

APIs became more popular and development took off after the release of the iPhone and the other smart devices that soon followed. This meant that businesses needed to give their end users easy access to information and data through applications, not just via the internet.

How is Bandwidth Involved with Communication APIs?

Bandwidth owns and operates one of the largest All-IP Voice Networks in the nation. On top of that network, we have built a full suite of communication APIs to fulfill the needs of all of our customers, large and small. Bandwidth provides services that  enable voice, messaging, 911 access, and phone numbers to be seamlessly integrated to work for our customers. Our Communications APIs are used to meet a wide range of business needs. From developers integrating voice and messaging capabilities into applications to call centers finding ways to better manage their calling infrastructure, Bandwidth’s communication APIs can create solutions for you. Try them free and see for yourself.

Bandwidth has mastered communication software so that you can master what matters most to you. Since we own and operate one of the largest All-IP Voice Networks in the nation, we are built to scale, aiming to grow with you while you grow your business. Integrating voice calling, text messaging, and other communication features allows you to do more for your customers.

What are the Benefits of Bandwidth's Communication APIs?

The Bandwidth API Platform makes using APIs even easier. Our fully-featured user interface lets anyone with web development skills create voice and messaging applications, and our platform includes a load of helpful how-to guides, sample code, and API help libraries to get you started.

Bandwidth makes it easy to get started with our APIs. Your own web developers can do the job, even if they know nothing about telecom. Incorporating voice and messaging is a lot like writing a web application. Plus, there’s no costly capital investment in hardware or specialized telecom equipment, so there’s no need to manage assets, patches, security, or anything else. You can leverage your telecom provider’s cloud hosting and hardware expertise.

Overall, your customers will love the convenience and personalization when you use automated messages to remind them about appointments or service updates. And when you integrate texting and voice calls, you will be free to focus more on customers and less on data entry, so you can increase your value gains and productivity at the same time.

What’s an API?

An API can be described as a set of programming standards and instructions for web-based applications and tools. Whenever a leading software company releases an API to the public, it empowers developers to design products that are powered by its service.

An API can also be described as a software to software interface that doesn’t demand human intervention. Unlike front-end facing user interfaces, APIs function behind the scenes in the backend and are unseen.

APIs enable seamless integration with multiple tools. So the end-user will never notice software functions being handed over from one platform to another.

What are Communications APIs?

Communication APIs, like the term sounds, are APIs built for the communications space. They establish a standardized syntax and methods of communication.

In other words, Communication APIs define rules of what interactions are possible between servers and communication applications. They also function as the communication layer between applications and databases.

This approach helps enterprises manipulate data securely and quickly. It also dictates how data should be formatted for rapid exchange.

For example, you can integrate Communication APIs into existing enterprise software. For the end-user, this means having complete control over multiple communications tools in one solution.

Enterprises that adopt this approach will benefit from enhanced productivity, streamlined workflows, and the ability to keep improving operations by leveraging data and analytics.

Some of the Communication APIs available today are as follows:

·         SMS API

·         Two-Way SMS API

·         SIP Trunking

·         Number Discovery

SMS API

SMS APIs can be integrated with several different types of software. For example, you can embed it in your marketing platform to send large volumes of text messages to millions of users in the MEA region.

You can also track the success of your campaigns by connecting it to your big data and analytics application. This approach can be scaled up or down depending on present needs, cost-effectively.

Key advantages of implementing SMS APIs:

·          Broadcast messages in the user’s preferred language

·         Ensure that messages are delivered over intelligent primary routes

·         Leverage live Home Location Register (HLR) lookup

·         Verify the quality of delivery (of your messages)
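As a concrete sketch of the features above, the request body for a bulk send might look like the following. The field names, the "ACME" sender ID, and the phone numbers are all illustrative; they do not correspond to any particular provider's API.

```python
import json

# Hypothetical bulk-SMS request body; field names are illustrative,
# not any specific SMS API provider's schema.
def build_sms_request(sender, recipients, message, language="en"):
    """Assemble a JSON body for a bulk SMS send with delivery reports."""
    return {
        "from": sender,
        "to": recipients,          # list of E.164 numbers
        "body": message,
        "language": language,      # broadcast in the user's preferred language
        "delivery_report": True,   # ask the provider to verify delivery
    }

payload = build_sms_request(
    "ACME", ["+971500000001", "+971500000002"], "Your order has shipped")
print(json.dumps(payload, indent=2))
```

A real integration would POST this body to the provider's send endpoint with an API key; only the payload shape is sketched here.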

Two-Way SMS API

For years, when businesses sent a text message to an individual, it was usually restricted to one-way communication (from the company to the person). From a marketing perspective, this is not always the most effective method.

Two-Way SMS breaks down this barrier and helps brands engage in SMS conversations with their customers in real-time. To achieve this, companies have to obtain a long code or short code phone number that can be dedicated exclusively for this activity.

Two-Way SMS APIs can also be integrated by medical practices within their appointment scheduling software. This approach will help staff SMS patients to confirm appointments. Whenever they respond stating that they can’t keep an appointment, the system can automatically message them back with a list of alternate dates and times.

Like SMS APIs, Two-Way SMS APIs also come with the benefit of broadcasting large volumes of text messages in the customer’s preferred language, the ability to scale up or down, and verification of the quality of delivery.

However, Two-Way SMS can do so much more. For example, you can create dynamic automated responses that can be triggered by a predefined set of keywords. You can also engage in opt-in tracking and leverage delivery reports to ascertain the success of each campaign.

 

Key benefits of Two-Way SMS APIs:

·         Easy access to customers who have opted to receive SMS

·         Low cost with high conversion and ROI

·         Immediate direct message delivery

·         Seamless integration with CRM platforms

·         Schedule and automate marketing campaigns
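The keyword-triggered automated responses mentioned above can be sketched in a few lines. The keywords and reply texts are made-up examples; a real deployment would sit behind the provider's inbound-SMS webhook.

```python
# Keyword-triggered auto-replies for inbound Two-Way SMS messages.
# Keywords and reply texts are hypothetical examples.
RESPONSES = {
    "CONFIRM": "Thanks! Your appointment is confirmed.",
    "CANCEL": "No problem. Reply 1 for Mon 9am or 2 for Tue 2pm to rebook.",
    "STOP": "You have been unsubscribed.",
}

def auto_reply(inbound_text):
    """Return the reply for the first recognized keyword in an inbound SMS."""
    for word in inbound_text.upper().split():
        if word in RESPONSES:
            return RESPONSES[word]
    return "Sorry, we didn't understand. Reply CONFIRM, CANCEL, or STOP."

print(auto_reply("cancel please"))
```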

SIP Trunking API

A SIP trunk can be described as a virtual phone line that enables businesses to connect with their customers around the world, cost-effectively. With this approach, you'll only pay for the voice connectivity you have used.

It’s a lot cheaper because there aren’t any physical lines involved that need to be maintained. As a result, it’s considerably less expensive than your traditional phone service.

Key benefits of deploying SIP trunking:

·         Call forwarding within the building or to international offices

·         Call recording to ensure quality control

·         High performance

·         Location-based call routing to ensure high-quality communication

·         Number-masking for customer protection

·         OTP fallback: a guaranteed backup channel for SMS

·         Real-time quality monitoring
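The location-based call routing item above can be illustrated with a toy gateway selector. The gateway names and coordinates are invented; a production SIP stack would route on real latency and carrier data rather than raw distance.

```python
import math

# Toy location-based routing: pick the SIP gateway geographically
# closest to the caller. Gateway locations are made up for illustration.
GATEWAYS = {
    "dubai": (25.20, 55.27),
    "london": (51.51, -0.13),
    "new_york": (40.71, -74.01),
}

def route_call(caller_lat, caller_lon):
    """Return the nearest gateway (crude Euclidean distance on lat/lon)."""
    return min(GATEWAYS,
               key=lambda g: math.dist(GATEWAYS[g], (caller_lat, caller_lon)))

print(route_call(24.45, 54.38))   # a caller near Abu Dhabi
```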

Number Discovery API

Before the emergence of Number Discovery APIs, companies had to expend a lot of time and resources to validate mobile numbers. Today, Number Discovery APIs that are supported by direct SS7 partnerships can be used to reduce the number of undelivered messages.

It helps businesses manage their marketing budgets effectively by checking the validity and operational status of mobile numbers before contacting customers. When the numbers are verified, texts can be delivered rapidly, securely, and reliably.

When every text is delivered with little to no delay, companies can also ensure that they aren't wasting their promotional resources.

Key benefits of integrating a Number Discovery API:

·         Automated database cleaning

·         Boost delivery rates

·         Cut Costs

·         Enhanced quality

·         Fraud prevention
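The "automated database cleaning" step above can be approximated offline with a simple E.164 format filter, as sketched below. A real Number Discovery API would additionally perform live HLR/SS7 lookups to check whether a number is active, which this sketch cannot do.

```python
import re

# Drop entries that are not plausible E.164 numbers before spending
# on delivery attempts. Format check only; no live HLR lookup here.
E164 = re.compile(r"^\+[1-9]\d{7,14}$")

def clean_numbers(numbers):
    """Keep only well-formed E.164 numbers, deduplicated, order kept."""
    seen, valid = set(), []
    for n in numbers:
        n = n.replace(" ", "")
        if E164.match(n) and n not in seen:
            seen.add(n)
            valid.append(n)
    return valid

print(clean_numbers(["+971 50 000 0001", "12345", "+971500000001", "+0123"]))
```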

Why Do Businesses Need Them?

Enterprises engage in extensive internal and external communication. Solutions like SIP trunking help keep costs down. The same applies to Two-Way SMSs and Number Discovery APIs as they enable companies to engage in marketing activities without dissipating their promotional budgets.

All these Communication APIs can be integrated into a single existing solution, cost-effectively. When businesses adopt this approach, they can enjoy improved productivity, streamlined workflows, enhanced collaboration, and more.

Unlike traditional communications solutions, CPaaS is quickly becoming the go-to technology for companies that want to take a DIY approach to communication and collaboration.

The abundance of communication APIs allows end-users to mix and match features in a single solution and effectively meet their business demands.

Amazon webservices for IoT

·         AWS IoT Core

·         Amazon FreeRTOS

·         AWS IoT Greengrass

·         AWS IoT 1-Click

·         AWS IoT Analytics

·         AWS IoT Button

·         AWS IoT Device Defender

·         AWS IoT Device Management

·         AWS IoT Events

·         AWS IoT SiteWise

·         AWS IoT Things Graph

·         AWS Partner Device Catalog

AWS IoT Core

AWS IoT Core is a managed cloud service that lets connected devices easily and securely interact with cloud applications and other devices. AWS IoT Core can support billions of devices and trillions of messages, and can process and route those messages to AWS endpoints and to other devices reliably and securely. With AWS IoT Core, your applications can keep track of and communicate with all your devices, all the time, even when they aren’t connected.

AWS IoT Core makes it easy to use AWS services like AWS Lambda, Amazon Kinesis, Amazon S3, Amazon SageMaker, Amazon DynamoDB, Amazon CloudWatch, AWS CloudTrail, and Amazon QuickSight to build Internet of Things (IoT) applications that gather, process, analyze and act on data generated by connected devices, without having to manage any infrastructure.
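For a flavor of how a device talks to AWS IoT Core, the sketch below builds a device-shadow update message. The `$aws/things/<name>/shadow/update` topic scheme and the `{"state": {"reported": ...}}` document shape follow AWS's documented shadow service, but the thing name and state fields are made-up examples, and the actual MQTT publish is omitted.

```python
import json

# Build the MQTT topic and JSON payload for an AWS IoT device shadow
# update, so applications can read the device's last reported state
# even while it is offline. Thing name and fields are examples.
def shadow_update(thing_name, reported_state):
    topic = f"$aws/things/{thing_name}/shadow/update"
    payload = json.dumps({"state": {"reported": reported_state}})
    return topic, payload

topic, payload = shadow_update(
    "freezer-7", {"temperature_c": -18.5, "door": "closed"})
print(topic)
```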

Amazon FreeRTOS

Amazon FreeRTOS (a:FreeRTOS) is an operating system for microcontrollers that makes small, low-power edge devices easy to program, deploy, secure, connect, and manage. Amazon FreeRTOS extends the FreeRTOS kernel, a popular open source operating system for microcontrollers, with software libraries that make it easy to securely connect your small, low-power devices to AWS cloud services like AWS IoT Core or to more powerful edge devices running AWS IoT Greengrass.

A microcontroller (MCU) is a single chip containing a simple processor that can be found in many devices, including appliances, sensors, fitness trackers, industrial automation, and automobiles. Many of these small devices could benefit from connecting to the cloud or locally to other devices. For example, smart electricity meters need to connect to the cloud to report on usage, and building security systems need to communicate locally so that a door will unlock when you badge in. Microcontrollers have limited compute power and memory capacity and typically perform simple, functional tasks. Microcontrollers frequently run operating systems that do not have built-in functionality to connect to local networks or the cloud, making IoT applications a challenge. Amazon FreeRTOS helps solve this problem by providing both the core operating system (to run the edge device) as well as software libraries that make it easy to securely connect to the cloud (or other edge devices) so you can collect data from them for IoT applications and take action.

AWS IoT Greengrass

AWS IoT Greengrass seamlessly extends AWS to devices so they can act locally on the data they generate, while still using the cloud for management, analytics, and durable storage. With AWS IoT Greengrass, connected devices can run AWS Lambda functions, execute predictions based on machine learning models, keep device data in sync, and communicate with other devices securely – even when not connected to the Internet.

With AWS IoT Greengrass, you can use familiar languages and programming models to create and test your device software in the cloud, and then deploy it to your devices. AWS IoT Greengrass can be programmed to filter device data and only transmit necessary information back to the cloud. You can also connect to third-party applications, on-premises software, and AWS services out-of-the-box with AWS IoT Greengrass Connectors. Connectors also jumpstart device onboarding with pre-built protocol adapter integrations and allow you to streamline authentication via integration with AWS Secrets Manager.

AWS IoT 1-Click

AWS IoT 1-Click is a service that enables simple devices to trigger AWS Lambda functions that can execute an action. AWS IoT 1-Click supported devices enable you to easily perform actions such as notifying technical support, tracking assets, and replenishing goods or services. AWS IoT 1-Click supported devices are ready for use right out of the box and eliminate the need for writing your own firmware or configuring them for secure connectivity. AWS IoT 1-Click supported devices can be easily managed. You can easily create device groups and associate them with a Lambda function that executes your desired action when triggered. You can also track device health and activity with the pre-built reports.

AWS IoT Analytics

AWS IoT Analytics is a fully-managed service that makes it easy to run and operationalize sophisticated analytics on massive volumes of IoT data without having to worry about the cost and complexity typically required to build an IoT analytics platform. It is the easiest way to run analytics on IoT data and get insights to make better and more accurate decisions for IoT applications and machine learning use cases.

IoT data is highly unstructured which makes it difficult to analyze with traditional analytics and business intelligence tools that are designed to process structured data. IoT data comes from devices that often record fairly noisy processes (such as temperature, motion, or sound). The data from these devices can frequently have significant gaps, corrupted messages, and false readings that must be cleaned up before analysis can occur. Also, IoT data is often only meaningful in the context of additional, third party data inputs. For example, to help farmers determine when to water their crops, vineyard irrigation systems often enrich moisture sensor data with rainfall data from the vineyard, allowing for more efficient water usage while maximizing harvest yield.

AWS IoT Analytics automates each of the difficult steps that are required to analyze data from IoT devices. AWS IoT Analytics filters, transforms, and enriches IoT data before storing it in a time-series data store for analysis. You can setup the service to collect only the data you need from your devices, apply mathematical transforms to process the data, and enrich the data with device-specific metadata such as device type and location before storing the processed data. Then, you can analyze your data by running ad hoc or scheduled queries using the built-in SQL query engine, or perform more complex analytics and machine learning inference. AWS IoT Analytics makes it easy to get started with machine learning by including pre-built models for common IoT use cases.
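The filter, transform, and enrich steps described above can be sketched as plain Python over a list of raw readings. The field names, the ADC scaling, and the device-metadata table are illustrative only, not the AWS IoT Analytics pipeline API.

```python
# filter -> transform -> enrich over raw sensor readings, mirroring
# the pipeline stages described in the text. All names are made up.
DEVICE_META = {"soil-01": {"type": "moisture", "location": "vineyard-north"}}

def pipeline(readings):
    out = []
    for r in readings:
        if r.get("value") is None:                        # filter: drop corrupted readings
            continue
        r = dict(r, value_pct=r["value"] / 1023 * 100)    # transform: raw ADC -> percent
        r.update(DEVICE_META.get(r["device"], {}))        # enrich: add device metadata
        out.append(r)
    return out

rows = pipeline([{"device": "soil-01", "value": 512},
                 {"device": "soil-01", "value": None}])
print(len(rows))
```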

You can also use your own custom analysis, packaged in a container, to execute on AWS IoT Analytics. AWS IoT Analytics automates the execution of your custom analyses created in Jupyter Notebook or your own tools (such as Matlab, Octave, etc.) to be executed on your schedule.

AWS IoT Analytics is a fully managed service that operationalizes analyses and scales automatically to support up to petabytes of IoT data. With AWS IoT Analytics, you can analyze data from millions of devices and build fast, responsive IoT applications without managing hardware or infrastructure.

AWS IoT Button

The AWS IoT Button is a programmable button based on the Amazon Dash Button hardware. This simple Wi-Fi device is easy to configure, and it’s designed for developers to get started with AWS IoT Core, AWS Lambda, Amazon DynamoDB, Amazon SNS, and many other Amazon Web Services without writing device-specific code.

You can code the button's logic in the cloud to configure button clicks to count or track items, call or alert someone, start or stop something, order services, or even provide feedback. For example, you can click the button to unlock or start a car, open your garage door, call a cab, call your spouse or a customer service representative, track the use of common household chores, medications or products, or remotely control your home appliances.

The button can be used as a remote control for Netflix, a switch for your Philips Hue light bulb, a check-in/check-out device for Airbnb guests, or a way to order your favorite pizza for delivery. You can integrate it with third-party APIs like Twitter, Facebook, Twilio, Slack or even your own company's applications. Connect it to things we haven’t even thought of yet.

AWS IoT Device Defender

AWS IoT Device Defender is a fully managed service that helps you secure your fleet of IoT devices. AWS IoT Device Defender continuously audits your IoT configurations to make sure that they aren’t deviating from security best practices. A configuration is a set of technical controls you set to help keep information secure when devices are communicating with each other and the cloud. AWS IoT Device Defender makes it easy to maintain and enforce IoT configurations, such as ensuring device identity, authenticating and authorizing devices, and encrypting device data. AWS IoT Device Defender continuously audits the IoT configurations on your devices against a set of predefined security best practices. AWS IoT Device Defender sends an alert if there are any gaps in your IoT configuration that might create a security risk, such as identity certificates being shared across multiple devices or a device with a revoked identity certificate trying to connect to AWS IoT Core.

AWS IoT Device Defender also lets you continuously monitor security metrics from devices and AWS IoT Core for deviations from what you have defined as appropriate behavior for each device. If something doesn’t look right, AWS IoT Device Defender sends out an alert so you can take action to remediate the issue. For example, traffic spikes in outbound traffic might indicate that a device is participating in a DDoS attack. AWS IoT Greengrass and Amazon FreeRTOS automatically integrate with AWS IoT Device Defender to provide security metrics from the devices for evaluation.
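The outbound-traffic example above can be illustrated with a minimal baseline check. The threshold factor and byte counts are invented; Device Defender's real detectors are considerably more sophisticated than a fixed multiple of the mean.

```python
# Flag a device whose outbound traffic spikes far above its own
# recent baseline. Threshold and sample values are illustrative.
def is_anomalous(history_bytes, current_bytes, factor=3.0):
    """True if current outbound traffic exceeds `factor` x the mean baseline."""
    baseline = sum(history_bytes) / len(history_bytes)
    return current_bytes > factor * baseline

history = [1200, 1100, 1350, 1280]   # normal outbound bytes per minute
print(is_anomalous(history, 50_000))
```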

AWS IoT Device Defender can send alerts to the AWS IoT Console, Amazon CloudWatch, and Amazon SNS. If you determine that you need to take an action based on an alert, you can use AWS IoT Device Management to take mitigating actions such as pushing security fixes.

AWS IoT Device Management

As many IoT deployments consist of hundreds of thousands to millions of devices, it is essential to track, monitor, and manage connected device fleets. You need to ensure your IoT devices work properly and securely after they have been deployed. You also need to secure access to your devices, monitor health, detect and remotely troubleshoot problems, and manage software and firmware updates.

AWS IoT Device Management makes it easy to securely onboard, organize, monitor, and remotely manage IoT devices at scale. With AWS IoT Device Management, you can register your connected devices individually or in bulk, and easily manage permissions so that devices remain secure. You can also organize your devices, monitor and troubleshoot device functionality, query the state of any IoT device in your fleet, and send firmware updates over-the-air (OTA). AWS IoT Device Management is agnostic to device type and OS, so you can manage devices from constrained microcontrollers to connected cars all with the same service. AWS IoT Device Management allows you to scale your fleets and reduce the cost and effort of managing large and diverse IoT device deployments.

AWS IoT Events

AWS IoT Events is a fully managed IoT service that makes it easy to detect and respond to events from IoT sensors and applications. Events are patterns of data identifying more complicated circumstances than expected, such as changes in equipment when a belt is stuck or connected motion detectors using movement signals to activate lights and security cameras. To detect events before AWS IoT Events, you had to build costly, custom applications to collect data, apply decision logic to detect an event, and then trigger another application to react to the event. Using AWS IoT Events, it’s simple to detect events across thousands of IoT sensors sending different telemetry data, such as temperature from a freezer, humidity from respiratory equipment, and belt speed on a motor, and hundreds of equipment management applications. You simply select the relevant data sources to ingest, define the logic for each event using simple ‘if-then-else’ statements, and select the alert or custom action to trigger when an event occurs. AWS IoT Events continuously monitors data from multiple IoT sensors and applications, and it integrates with other services, such as AWS IoT Core and AWS IoT Analytics, to enable early detection and unique insights into events. AWS IoT Events automatically triggers alerts and actions in response to events based on the logic you define. This helps resolve issues quickly, reduce maintenance costs, and increase operational efficiency.
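The 'if-then-else' event logic described above can be sketched as follows. The sensor names, thresholds, and action labels are illustrative examples, not AWS IoT Events syntax.

```python
# Simple if-then-else event detection over incoming telemetry,
# in the spirit of the freezer and belt examples in the text.
def detect_event(sensor, value):
    if sensor == "freezer_temp_c":
        if value > -10:                    # freezer warming up
            return "alert:freezer_too_warm"
        return None
    elif sensor == "belt_speed_rpm":
        if value == 0:                     # belt has stopped
            return "alert:belt_stuck"
        return None
    return None

print(detect_event("freezer_temp_c", -4))
```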

AWS IoT SiteWise

AWS IoT SiteWise is a managed service that makes it easy to collect and organize data from industrial equipment at scale. You can easily monitor equipment across your industrial facilities to identify waste, such as breakdown of equipment and processes, production inefficiencies, and defects in products. Today, getting performance metrics from industrial equipment is tough because data is often locked into proprietary on-premises data stores and typically requires specialized expertise to retrieve it and put it in a format that is useful for searching and analysis. IoT SiteWise simplifies this process by providing software running on a gateway that resides in your facilities and automates the process of collecting and organizing industrial equipment data. This gateway securely connects to your on-premises data servers, collects data, and sends the data to the AWS Cloud. You can run the IoT SiteWise software on an AWS Snowball Edge gateway or install the IoT SiteWise software on popular third-party industrial gateways. These gateways are specifically designed for industrial environments that are likely already in your facilities connecting your industrial equipment.

You can use IoT SiteWise to monitor operations across facilities, quickly compute common industrial performance metrics, and build applications to analyze industrial equipment data, prevent costly equipment issues, and reduce production inefficiencies. With IoT SiteWise, you can focus on understanding and optimizing your operations, rather than building costly in-house data collection and management applications.

AWS IoT Things Graph

AWS IoT Things Graph is a service that makes it easy to visually connect different devices and web services to build IoT applications.

IoT applications are being built today using a variety of devices and web services to automate tasks for a wide range of use cases, such as smart homes, industrial automation, and energy management. Because there aren't any widely adopted standards, it's difficult today for developers to get devices from multiple manufacturers to connect to each other as well as with web services. This forces developers to write lots of code to wire together all of the devices and web services they need for their IoT application. AWS IoT Things Graph provides a visual drag-and-drop interface for connecting and coordinating devices and web services, so you can build IoT applications quickly. For example, in a commercial agriculture application, you can define interactions between humidity, temperature, and sprinkler sensors with weather data services in the cloud to automate watering. You represent devices and services using pre-built reusable components, called models, that hide low-level details, such as protocols and interfaces, and are easy to integrate to create sophisticated workflows.

You can get started with AWS IoT Things Graph using these pre-built models for popular device types, such as switches and programmable logic controllers (PLCs), or create your own custom model using a GraphQL-based schema modeling language, and deploy your IoT application to AWS IoT Greengrass-enabled devices such as cameras, cable set-top boxes, or robotic arms in just a few clicks. IoT Greengrass is software that provides local compute and secure cloud connectivity so devices can respond quickly to local events even without internet connectivity, and runs on a huge range of devices from a Raspberry Pi to a server-level appliance. IoT Things Graph applications run on IoT Greengrass-enabled devices.
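As a toy stand-in for the agriculture workflow described above, the sketch below wires a moisture reading and a weather value into a sprinkler decision. The names and thresholds are invented; this is ordinary Python, not Things Graph's GraphQL-based modeling language.

```python
# Toy workflow: water only when the soil is dry AND no meaningful
# rain is forecast. Thresholds are illustrative examples.
def should_water(moisture_pct, rain_forecast_mm):
    return moisture_pct < 30 and rain_forecast_mm < 2

def run_workflow(moisture_pct, rain_forecast_mm):
    """Wire the sensor model and weather service model to the sprinkler."""
    return "sprinkler:on" if should_water(moisture_pct, rain_forecast_mm) \
        else "sprinkler:off"

print(run_workflow(22, 0.5))
```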

AWS Partner Device Catalog

The AWS Partner Device Catalog helps you find devices and hardware to help you explore, build, and go to market with your IoT solutions. Search for and find hardware that works with AWS, including development kits and embedded systems to build new devices, as well as off-the-shelf-devices such as gateways, edge servers, sensors, and cameras for immediate IoT project integration. The choice of AWS enabled hardware from our curated catalog of devices from APN partners can help make the rollout of your IoT projects easier. All devices listed in the AWS Partner Device Catalog are also available for purchase from our partners to get you started quickly.

SKYNET IOT MESSAGING PLATFORM

Skynet, not to be confused with the Artificial Intelligence company that Google bought last month, is a cloud-based MQTT-powered network that scales to meet any need, whether the nodes are smart home devices, sensors, cloud resources, drones, Arduinos, or Raspberry Pis. It is powered by Node.js, known for fast, event-driven operations, which makes it ideal for nodes and devices such as the Raspberry Pi, Arduino, and Tessel.

When nodes and devices register with Skynet, they are assigned a unique id known as a UUID along with a security token. Upon connecting your node or device to Skynet, you can query and update devices on the network and send machine-to-machine (M2M) messages in an RPC-style fashion. Essentially, real time M2M communication is what Skynet aims for.

Skynet offers a realtime WebSocket API as well as a Node.js NPM module to make event-driven IoT development fast and easy. (Note: a security token can be supplied in the registration request rather than having Skynet assign one for you.)
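The registration flow described above can be sketched with Python's standard library. An in-memory dict stands in for Skynet's cloud registry; none of the function or variable names below are Skynet's actual API.

```python
import uuid
import secrets

# Each registering device gets a 36-character UUID plus a secret
# token, as the text describes. The dict is a stand-in registry.
REGISTRY = {}

def register(device_info, token=None):
    """Register a device; the caller may supply its own token."""
    device_id = str(uuid.uuid4())
    token = token or secrets.token_hex(16)
    REGISTRY[device_id] = {"token": token, **device_info}
    return device_id, token

dev_id, tok = register({"type": "drone", "city": "san francisco"})
print(len(dev_id))   # UUIDs are 36 characters, matching the text
```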

Drones Get A Messaging Network Aptly Called Skynet

For the past nine months a company called Skynet has been working on a machine to machine (M2M) system for drones to talk to each other. Named after the self-aware artificial intelligence system in the Terminator movie series, Skynet is designed to run on a single network or mesh of IoT networks that share a common API or communications protocol.  In this scenario, devices can discover, query, and message other devices on the network.

CEO Chris Matthieu was one of the early developers of voice APIs and apps. With SkyNet, he has turned his attention to drones. He recently detailed the technology for us to show the capabilities that drones have and the new stacks people are building to make the machines increasingly sophisticated, as illustrated in this video Matthieu shot.

SkyNet is running on a dozen Amazon EC2 servers and has nearly 50,000 registered smart devices, including Arduinos, Sparks, Raspberry Pis, Intel Galileos, and BeagleBoards, Matthieu said. SkyNet runs as an IoT platform-as-a-service (PaaS) as well as a private cloud through Docker, the new lightweight container technology. The platform is written in Node.js and released under an MIT open source license on GitHub.

The single SkyNet API supports the following IoT protocols: HTTP, REST, WebSockets, MQTT (Message Queue Telemetry Transport), and CoAP (Constrained Application Protocol) for guaranteed message delivery and low-bandwidth satellite communications, Matthieu said. Every connected device is assigned a 36 character UUID and secret token that act as the device’s strong credentials. Security permissions can be assigned to allow device discoverability, configuration, and messaging.

The company manages a directory service for querying devices that meet search criteria such as “all online drones in san francisco,” Matthieu said. An array of UUIDs are returned meeting the search criteria allowing the ability to message one or all of these UUIDs with instructions. Presence (online/offline) of each connected device is managed by realtime WebSocket communications. MQTT allows SkyNet to message devices when they reconnect from being offline.
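The directory query described above can be illustrated with an in-memory filter that returns the UUIDs of matching devices. The registry contents, attribute names, and shortened UUIDs are made up for the example.

```python
# Toy directory service: filter registered devices by attributes
# ("all online drones in san francisco") and return matching UUIDs.
DEVICES = {
    "aaaa-1": {"type": "drone", "city": "san francisco", "online": True},
    "bbbb-2": {"type": "drone", "city": "san francisco", "online": False},
    "cccc-3": {"type": "sensor", "city": "san francisco", "online": True},
}

def query(**criteria):
    """Return UUIDs of devices whose attributes match all criteria."""
    return [uid for uid, attrs in DEVICES.items()
            if all(attrs.get(k) == v for k, v in criteria.items())]

print(query(type="drone", city="san francisco", online=True))
```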

SkyNet recently released its IoT Hub, which allows the user to connect smart devices with and without IP addresses directly to SkyNet, including Nest, Philips Hue lightbulbs, Belkin WeMos, Insteons, and other not-so-smart devices such as serial port devices and RF (radio frequency) devices. Not only does this allow any device to be connected to the Internet, but it also allows people to message smart devices without going through the manufacturers' clouds and apps. The smart device Hub plug-ins are Node.js NPM modules, making them easy to share, extend, and deploy.

The company also recently released a SkyNet operating system, Matthieu said. It turns any Arduino-compatible device (Arduino, Spark, Pinoccio, etc) into a messaging capable hardware device on the Internet. When the Arduino boots up, it uses its built-in ethernet jack or wifi chip (or ethernet/wifi shield) to connect and authenticate with SkyNet — no CPUs are required to control the device. With built-in firmata and a SkyNet message, a person can turn on and off Arduino pins (including LEDs, servos, motors, power relays, etc.) and read from pins connected to sensors.

You could duct tape one of these devices to a light pole with a small solar panel and rechargeable batteries. It could smart-enable your city block.

NodeRed, a visual tool for wiring the IoT, is now connected to SkyNet and can control a network of connected smart devices with a drag and drop designer.

A Talking, Flying Network

There really is no way to personally manage all of the connected devices in the world. Devices will have to make their own decisions that will connect through mesh networks.

What comes of this new network will depend in many respects on the openness of the things in the world so they can interoperate. That means a much different network than what today's cloud services offer. More so, the drones and things of the world will depend on each other through communication, much like we humans do today.

The first thing to understand about analytics on IoT data is that it involves datasets generated by sensors, which are now both cheap and sophisticated enough to support a seemingly endless variety of use cases. The potential of sensors lies in their ability to gather data about the physical environment, which can then be analyzed or combined with other forms of data to detect patterns.

Introduction to Data Analytics for IoT

IoT analytics is the application of data analysis tools and procedures to realize value from the huge volumes of data generated by connected Internet of Things devices. The potential of IoT analytics is often discussed in relation to the Industrial IoT. The IIoT makes it possible for organizations to collect and analyze data from sensors on manufacturing equipment, pipelines, weather stations, smart meters, delivery trucks and other types of machinery. IoT analytics offers similar benefits for the management of data centers and other facilities, as well as retail and healthcare applications.

IoT data can be thought of as a subset and a special case of big data and, as such, consists of heterogeneous streams that must be combined and transformed to yield consistent, comprehensive, current, and correct information for business reporting and analysis. Data integration is complex for IoT data: there are many types of devices, most of which are not designed for compatibility with other systems. Data integration, and the analytics that rely on it, are two of the biggest challenges to IoT development.

Big data is sometimes characterized by the 3Vs model: volume, variety, and velocity. Volume refers to the amount of data, variety refers to the number of different types of data and devices, and velocity refers to the speed of data processing. The challenges of big data analytics, and of IoT analytics, result from the simultaneous expansion of all three properties, rather than from volume alone.

Apache Hadoop is an open source software framework for storage and large-scale processing of datasets on clusters of commodity hardware. Hadoop is an Apache top-level project built and used by a global community of contributors and users. It is licensed under the Apache License 2.0.

The Apache Hadoop framework is composed of the following modules:

Hadoop Common: contains libraries and utilities needed by other Hadoop modules

Hadoop Distributed File System (HDFS): a distributed file-system that stores data on the commodity machines, providing very high aggregate bandwidth across the cluster

Hadoop YARN: a resource-management platform responsible for managing compute resources in clusters and using them for scheduling of users' applications

Hadoop MapReduce: a programming model for large scale data processing

All the modules in Hadoop are designed with a fundamental assumption that hardware failures (of individual machines, or racks of machines) are common and thus should be automatically handled in software by the framework. Apache Hadoop's MapReduce and HDFS components originally derived respectively from Google's MapReduce and Google File System (GFS) papers.
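The MapReduce model described above can be sketched, purely illustratively, as a small in-memory word count in Python. This is not Hadoop's actual Java API; the function and variable names here are hypothetical, chosen only to show the map, shuffle, and reduce phases:

```python
from collections import defaultdict

def map_phase(record):
    # Emit a (word, 1) pair for every word in one input line.
    for word in record.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Group all values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Combine all values for one key into a single result.
    return (key, sum(values))

def mapreduce(records):
    pairs = [p for r in records for p in map_phase(r)]
    return dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())

counts = mapreduce(["the quick fox jumps", "the lazy dog"])
```

In real Hadoop, the map and reduce functions run in parallel across many machines, and the shuffle happens over the network; the logical flow, however, is the same as in this sketch.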

Beyond HDFS, YARN and MapReduce, the entire Apache Hadoop "platform" is now commonly considered to consist of a number of related projects as well: Apache Pig, Apache Hive, Apache HBase, and others.

For end users, though MapReduce Java code is common, any programming language can be used with "Hadoop Streaming" to implement the "map" and "reduce" parts of the user's program. Apache Pig and Apache Hive, among other related projects, expose higher-level user interfaces, Pig Latin and a SQL variant respectively. The Hadoop framework itself is mostly written in the Java programming language, with some native code in C and command-line utilities written as shell scripts.
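As a rough sketch of the Hadoop Streaming style mentioned above, the mapper and reducer below implement a word count over plain iterables of lines so they can run standalone. In a real streaming job each would be a separate script reading sys.stdin, passed to the framework with its -mapper and -reducer flags; the tab-separated "key<TAB>value" line format is the part Hadoop Streaming actually relies on:

```python
import itertools

def mapper(lines):
    # Plain text in, "word\t1" lines out.
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(sorted_lines):
    # Hadoop delivers mapper output sorted by key, so all lines for one
    # word arrive together and can be grouped with itertools.groupby.
    keyed = (line.split("\t") for line in sorted_lines)
    for word, group in itertools.groupby(keyed, key=lambda kv: kv[0]):
        total = sum(int(count) for _, count in group)
        yield f"{word}\t{total}"

# Simulate the framework: map, sort by key, then reduce.
mapped = sorted(mapper(["the quick fox", "the lazy dog"]))
result = dict(line.split("\t") for line in reducer(mapped))
```

The sort step here stands in for Hadoop's shuffle-and-sort phase, which is what lets the reducer process each key's lines as one contiguous group.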

HDFS and MapReduce

There are two primary components at the core of Apache Hadoop 1.x: the Hadoop Distributed File System (HDFS) and the MapReduce parallel processing framework. These are both open source projects, inspired by technologies created inside Google.

Hadoop distributed file system

The Hadoop Distributed File System (HDFS) is a distributed, scalable, and portable file system written in Java for the Hadoop framework. A Hadoop cluster nominally has a single namenode plus a cluster of datanodes, which together form the HDFS cluster; a datanode is not required on every node. Each datanode serves up blocks of data over the network using a block protocol specific to HDFS. The file system uses the TCP/IP layer for communication, and clients use remote procedure calls (RPC) to communicate with the namenode and datanodes.

HDFS stores large files (typically in the range of gigabytes to terabytes) across multiple machines. It achieves reliability by replicating the data across multiple hosts, and hence does not require RAID storage on hosts. With the default replication value, 3, data is stored on three nodes: two on the same rack, and one on a different rack. Data nodes can talk to each other to rebalance data, to move copies around, and to keep the replication of data high. HDFS is not fully POSIX-compliant, because the requirements for a POSIX file-system differ from the target goals for a Hadoop application. The tradeoff of not having a fully POSIX-compliant file-system is increased performance for data throughput and support for non-POSIX operations such as Append.
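The default rack-aware placement for a replication factor of 3 described above can be illustrated with a toy sketch. The names and data structures are hypothetical, not HDFS's actual implementation, and it assumes there is a second rack with at least two datanodes:

```python
import random

def place_replicas(writer, nodes_by_rack):
    # First replica: the node where the writer runs.
    first = writer
    writer_rack = next(r for r, ns in nodes_by_rack.items() if writer in ns)
    # Second and third replicas: two distinct nodes on a different rack,
    # giving "two on the same rack, one on a different rack" overall.
    other_racks = [r for r in nodes_by_rack if r != writer_rack]
    remote_rack = random.choice(other_racks)
    second, third = random.sample(nodes_by_rack[remote_rack], 2)
    return [first, second, third]

cluster = {"rack1": ["n1", "n2"], "rack2": ["n3", "n4", "n5"]}
replicas = place_replicas("n1", cluster)
```

This placement trades a little write bandwidth (one copy crosses racks) for the ability to survive the loss of an entire rack.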

HDFS added high-availability capabilities in release 2.x, allowing the main metadata server (the NameNode) to be failed over to a backup in the event of failure, either manually or automatically.

The HDFS file system includes a so-called secondary namenode, which misleads some people into thinking that when the primary namenode goes offline, the secondary namenode takes over. In fact, the secondary namenode regularly connects with the primary namenode and builds snapshots of the primary namenode's directory information, which the system then saves to local or remote directories. These checkpointed images can be used to restart a failed primary namenode without having to replay the entire journal of file-system actions and then edit the log to create an up-to-date directory structure. Because the namenode is the single point for storage and management of metadata, it can become a bottleneck when supporting a huge number of files, especially a large number of small files. HDFS Federation, a newer addition, aims to tackle this problem to a certain extent by allowing multiple namespaces served by separate namenodes.

An advantage of using HDFS is data awareness between the job tracker and task tracker. The job tracker schedules map or reduce jobs to task trackers with an awareness of the data location. For example, if node A contains data (x, y, z) and node B contains data (a, b, c), the job tracker schedules node B to perform map or reduce tasks on (a,b,c) and node A would be scheduled to perform map or reduce tasks on (x,y,z). This reduces the amount of traffic that goes over the network and prevents unnecessary data transfer. When Hadoop is used with other file systems, this advantage is not always available. This can have a significant impact on job-completion times, which has been demonstrated when running data-intensive jobs. HDFS was designed for mostly immutable files and may not be suitable for systems requiring concurrent write-operations.
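The data-aware scheduling in the node A / node B example above can be sketched as follows. This is a hypothetical simplification, not the real job tracker logic:

```python
def schedule(tasks, blocks_by_node):
    # tasks: task name -> the input block that task reads
    # blocks_by_node: node name -> set of blocks stored locally on it
    assignments = {}
    for task, block in tasks.items():
        local = [n for n, blocks in blocks_by_node.items() if block in blocks]
        # Prefer a node that already holds the data (no network transfer);
        # otherwise fall back to any available node.
        assignments[task] = local[0] if local else next(iter(blocks_by_node))
    return assignments

cluster = {"nodeA": {"x", "y", "z"}, "nodeB": {"a", "b", "c"}}
plan = schedule({"t1": "a", "t2": "x"}, cluster)
```

Here task t1, which reads block (a), lands on node B, and t2, which reads block (x), lands on node A, exactly as the text describes.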

Another limitation of HDFS is that it cannot be mounted directly by an existing operating system. Getting data into and out of the HDFS file system, an action that often needs to be performed before and after executing a job, can be inconvenient. A Filesystem in Userspace (FUSE) virtual file system has been developed to address this problem, at least for Linux and some other Unix systems.

File access can be achieved through the native Java API; through the Thrift API, which can generate a client in the language of the user's choosing (C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, Smalltalk, or OCaml); through the command-line interface; or by browsing over HTTP with the HDFS-UI web app.


UNIT-V

IoT Product Manufacturing – From Prototype to Reality

THE LONG TAIL OF THE INTERNET

As we have seen, huge changes in business practice are usually facilitated by, or brought about as a consequence of, technological change. One of the greatest technological paradigm shifts of the twentieth century was the Internet. From Tim Berners-Lee's first demonstration of the World Wide Web in 1990, it took only five years for eBay and Amazon to open up shop, and another five years for them to emerge as not only survivors but victors of the dot-com bubble. Both companies changed the way we buy and sell things. Chris Anderson of Wired magazine coined and popularized the phrase "long tail" to explain the mechanism behind the shift.

A physical bricks-and-mortar shop has to pay rent and maintain inventory, all of which takes valuable space in the shop; therefore, it concentrates on providing what will sell to the customers who frequent it: the most popular goods, the "hits", or the Short Head. In comparison, an Internet storefront exposes only bits, which are effectively free. Of course, Amazon has to maintain warehouses and stock, but these can be managed much more efficiently than a public-facing shop. Therefore, it can ship vastly greater numbers of products, some of which may be less popular but still sell in huge quantities when all the sales are totalled across all the products.

Whereas a specialist shop in Liverpool; Springfield, Oregon; or Florence, Italy, may or may not find enough customers to make its niche sustainable, depending on the town's size and cultural diversity, on the Internet all niches can find a market. Long-tail Internet giants help this process by aggregating products from smaller providers, as with Amazon Marketplace or eBay's sellers. This approach helps thousands of small third-party traders to exist, but it also makes money for the aggregator, which doesn't have to handle the inventory or delivery at all, having outsourced them to the long tail.

E-books and print-on-demand are also changing the face of publishing with a far wider variety of available material and a knock-on change in the business models of writers and publishers that is still playing out today. Newer business models have been created and already disrupted, as when Google overturned the world of search engines, which hadn’t even existed a decade previously. Yet although Google’s stated goal is “to organize the world’s information and make it universally accessible and useful” (www.google.com/about/company/), it makes money primarily through exploiting the long tail of advertising, making it easy for small producers to advertise effectively alongside giant corporations.

LEARNING FROM HISTORY

We've seen some highlights of business models over the sweep of human history, but what have we learnt that we could apply to an Internet of Things project that we want to turn into a viable and profitable business? First, we've seen that some models are ancient, such as Make Thing Then Sell It. The way you make it or the way you sell it may change, but the basic principle has held for millennia. Second, we've seen how new technologies have inspired new business models. We haven't yet exhausted all the new types of business facilitated by the Internet and the World Wide Web. If our belief that the Internet of Things will represent a similar sea change in technology is true, it will be accompanied by new business models we can barely conceive of today. Third, although there are recurring patterns and common models, there are countless variations. Subtle changes to a single factor, such as the manufacturing process or the way you pay for a product or resource, can have a knock-on effect on your whole business. Finally, new business models have the power to change the world, as when branded soap ushered in mass consumerism and mass production changed the notion of work itself.

THE BUSINESS MODEL CANVAS

One of the most popular templates for working on a business model is the Business Model Canvas by Alexander Osterwalder and his startup, the Business Model Foundry. The canvas is a Creative Commons–licensed single-page planner.

At first sight, it looks as though each box is simply an element in a form and the whole thing could be replaced by a nine-point checklist. However, the boxes are designed to be a good size for sticky notes, emphasizing that you can play with the ideas you have and move them around. Also, the layout gives a meaning and context to each item.

Let’s look at the model, starting with the most obvious elements and then drilling down into the grittier details that we might neglect without this kind of template.

At the bottom right, we have Revenue Streams, which is more or less the question of "How are you going to make money?" with which we started this chapter. Although its position suggests that it is indeed one of the important desired outputs of the business, it is by no means the only consideration!

The central box, Value Propositions, is, in plainer terms, what you will be producing—that is, your Internet of Things product, service, or platform.

The Customer Segments are the people you plan to deliver the product to. That might be other makers and geeks (if you are producing a kit form device), the general public, families, businesses, or 43-year-old accountants (famously, the average customer of Harley-Davidson).

The Customer Relationships might involve a lasting communication between the company and its most passionate customers via social media. This could convey an advantage but may be costly to maintain. Maintaining a "community" of your customers may be beneficial, but which relationships will you prioritise so as to keep communicating with your most valuable customer segments?

Channels are ways of reaching the customer segments. From advertising and distributing your product, to delivery and after-sales, the channels you choose have to be relevant to your customers.

On the left side, we have the things without which we have no product to sell. The Key Activities are the things that need to be done. The Thing needs to be manufactured; the code needs to be written. Perhaps you need a platform for it to run on and a design for the website and the physical product.

Key Resources include the raw materials that you need to create the product but also the people who will help build it. The intellectual resources you have (data and, if you choose to go down that route, patents and copyright) are also valuable, as are the finances required to pay for all this!

Of course, few companies can afford the investment in time and money to do all the Key Activities themselves or even marshal all the Key Resources. (Henry Ford tried hard, but even he didn't manage.) You will need Key Partners: businesses that are better placed to supply specific skills or resources, because that is their business model, and they are geared up to do it more cheaply or better than you could yourself. Perhaps you will get an agency to do your web design and use a global logistics firm to do your deliveries. Will you manufacture everything yourself or get a supplier to create components or even assemble the whole product?

The Cost Structure requires you to put a price on the resources and activities you just defined. Which of them are most expensive? Given the costs you will have, this analysis also helps you determine whether you will be more cost driven (sell cheaply, and in great volume via automation and efficiency) or more value driven (sell a premium product at higher margins, but in smaller quantities).
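The cost-driven versus value-driven choice can be made concrete with a toy calculation. All figures here are hypothetical, chosen only to show that very different volumes and margins can reach similar profit:

```python
def profit(unit_price, unit_cost, volume):
    # Simple unit economics: margin per item times the number sold.
    return (unit_price - unit_cost) * volume

# Cost driven: thin margin, big volume (hypothetical figures).
cost_driven = profit(unit_price=25, unit_cost=20, volume=10_000)

# Value driven: fat margin, small volume (hypothetical figures).
value_driven = profit(unit_price=250, unit_cost=150, volume=500)
```

Both routes yield the same total here, but they imply very different Key Activities and Cost Structures: the first demands automation and efficiency, the second demands premium design and a willing niche.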

FUNDING AN INTERNET OF THINGS STARTUP

As important as future costs and revenues are to a well-planned business model, there will most likely be a period when you have only costs and no income. The problem of how to get initial funding is a critical one, and looking at several options to deal with it is worthwhile.

If you have enough personal money to concentrate on your new Internet of Things startup full time without taking on extra work, you can, of course, fund your business yourself. Apart from the risk of throwing money into a personal project that has no realistic chance of success (which this chapter aims to avoid!), this would be a very fortunate situation to be in, and luckier still if you have the surplus money to bankroll costs for materials and staff.

If, like most of the rest of us, you aren't Bruce Wayne, never fear; there are still ways to kick off a project. If the initial stages don't require a huge investment of money, your time will be the main limiting factor. If you can't afford to work full time on your new project, perhaps you can spare a day at the weekend or several evenings after work. You might be able to arrange to work part time at your day job; even an extra afternoon or day might be enough to get things moving. Many people try to combine a startup with a consultancy business, planning to take short but lucrative contracts which support the following period of frantic startup work. Paul Graham advises some caution with this approach, as the easy money from consulting may be too much of a crutch and remove one of the primary motors for a startup: the fear of failure.

Making sure that you don’t need to spend huge amounts on the startup is key. You probably don’t need an office in the early stages, and perhaps you don’t need expensive Aeron chairs. You can work from your kitchen table, a café, or out of a co-working space.

Everything we’ve discussed in the chapters on prototyping is designed to get a Minimum Viable Product out to show to people and start gathering interest. You can get surprisingly far with a cheap hosting account or a service for deploying apps in the cloud, such as Heroku, an Arduino Ethernet, some basic electronic components, some cardboard, and a knife. Until you get funding, you may be able to scale up your spending on any of these as and when you really need to.

DESIGNING PRINTED CIRCUIT BOARDS

Soldering things up is a good step towards making your prototype more robust, because the connections should, if your soldering skills are up to par, mean that you have a solid electrical connection that won’t shake loose. After you’ve done this, you should have something that will survive being given to an end user, unlike a breadboarded prototype which you have to handle with kid gloves. So that means you can just repeat that process for each item you’re building, right?

Well, you could, but you will soon get fed up with soldering each item by hand. Now might be a good time to start recruiting a whole army of people ready to solder things up.

There’s a relatively natural progression to making more professional PCBs.

Moving beyond stripboard, designing and etching your own custom PCBs gives you more options for laying out the circuit and makes it easier to solder up, as the only holes in the board will be those for components. It also lets you use components that don't easily fit into the stripboard grid pattern of holes, including some of the simpler surface-mount components.

While a big step forwards, homemade boards will still lack that fully professional finish. That’s because they won’t have the green solder mask or silkscreen layers that a PCB manufacturer will give you. Moving to professionally manufactured boards further simplifies the assembly process because the solder mask will make the soldering a bit easier, and, more importantly, the silkscreen provides outlines of where each component should be placed.

If you are doing the PCB assembly or selling the PCBs as part of a kit, you will stick almost exclusively to through-hole components, as they are the easiest for the beginner to solder. You might get away with the occasional surface-mount item, but only if the leads aren’t too fine or closely spaced.

Other concerns effectively force you to move to a custom PCB: if the routing of connections between components is particularly complex, only a multilayer PCB will let you cross connections; if any of your components are available only in surface-mount packages, a custom PCB will let you place them without resorting to additional breakout boards; and if you’ve been using an off-the-shelf microcontroller board (such as an Arduino or a BeagleBone) to provide the processor, and so on, a custom PCB will give you the option of merging that onto your circuit board, removing the need for connectors between the boards and letting you discard any unused components from the off-the-shelf board, thus saving both space on the PCB and the cost of the parts.

The range of options for building a custom PCB runs from etching (or milling) boards yourself, through using one of the many mail-order batch PCB services, to having them made and populated for you. Whichever of those options you choose, the first step in creating your PCB is going to involve designing it. Before we investigate the available software for PCB design, we should look at what makes up a PCB and some of the terms you are likely to encounter.

The PCB is made up of a number of layers of fibreglass and copper, sandwiched together into the board. The fibreglass provides the main basis for the board but also acts as an insulator between the different layers of copper, and the copper forms the “wires” to connect the components in the circuit together.

Given that you won’t want to connect all the components to each other at the same time, which would happen if you had a solid plate of copper across the whole board, sections of the copper are etched away—usually chemically, but it is possible to use a CNC mill for simple boards. These remaining copper routes are called tracks or traces and make the required connections between the components.

The points on the tracks where they join the leg of a component are known as pads. For surface-mount components, they are just an area of copper on the top or bottom of the board, which provides the place for the component to be soldered. For through-hole connections, they also have a hole drilled through the board for the leg to poke through.

Single-sided boards have only one layer of copper, usually on the bottom of the board; because they're often for home-made circuits with through-hole components, the components go on the top with their legs poking through the board and soldered on the underside. Double-sided boards, predictably, have two layers of copper: one on the top and one on the bottom. More complicated circuits, particularly as you start to use more advanced processors which come in smaller packages, may need even more layers to allow room to route all the traces to where they are needed. Four- or six-layer boards aren't uncommon, and boards with eight or more layers are used for really complicated circuits.

When you get beyond two layers, you run out of surfaces on the board to place the copper layer. Additional layers require a more complex manufacturing procedure in which alternating layers of copper and fibreglass are built up, a bit like you would make a sandwich.

This means that the middle layers are embedded inside the circuit board and so don’t have an accessible area of copper for the pad. Making a connection to one of these layers for through-hole components is easy because the hole that the leg goes through pierces each layer. When the holes drilled through the board are plated—a process in which the walls of the holes are coated in a thin layer of copper—any layers with copper at that point are connected together.

When you need to connect traces on two layers together at a point where there isn’t a hole for the leg of a component, you use a via. This is a similar, though generally smaller, hole through the board purely to connect different layers of copper once plated. You also can use blind vias, which don’t pierce all the way through the board, when you don’t want to connect every layer together; however, because of this, they complicate the PCB manufacturing process and are best avoided unless absolutely necessary.

In places where you have many connections to a common point, rather than run lots of tracks across the circuit, you can more easily devote most of a layer of the board to that connection and just leave it as a filled area of copper. This is known as a plane rather than a trace and is frequently used to provide a route to ground. An added advantage of this approach is that the ground plane provides a way to “catch” some of the stray electromagnetic signals that can result, particularly from high-frequency signal lines. This reduces the amount of electromagnetic interference (EMI) given out by the circuit, which helps prevent problems with other parts of the circuit or with other nearby electronic devices.

The surfaces of professionally manufactured PCBs undergo processes to apply two other finishes which make them easier to use.

First, all the parts of the board and bare copper which aren’t the places where component legs need to be soldered are covered in solder mask. Solder mask is most commonly green, giving the traditional colour of most circuit boards, though other colours are also available. The mask provides a solder-resistant area, encouraging the solder to flow away from those areas and to adhere instead to the places where it is needed to connect the components to the tracks. This reduces the likelihood of a solder joint accidentally bridging across two points in the circuit where it shouldn’t.

Then, on top of the solder mask is the silkscreen. This is a surface finish of paint applied, as the name suggests, via silkscreen printing. It is used to mark out where the components go and label the positions for easy identification of parts. It generally also includes details such as the company or person who designed the board, a name or label to describe what it is for, and the date of manufacture or revision number of the design. This last piece of information is vital; it is more likely than not that you’ll end up with a few iterations of the circuit design as you flush out bugs. Being able to tell one version from the other among the boards on your work bench, or, more importantly, knowing exactly which design is in a product with a user reported fault, is essential.

A good rule of thumb for keeping down the costs of production is to minimise the amount of time a person has to work on each item. Machines tend to be cheaper than people, and the smaller the proportion of labour in your costs, the more you'll be able to afford to pay a decent wage to the people who are involved in assembling your devices. If, as discussed in "Prototyping the Physical Design", your design uses some of the newer digital manufacturing techniques such as laser cutting or 3D printing, you might already have little labour in your assembly process.

However, whilst minimising labour costs is a good target, it's not the only factor you need to consider in your production recipe; production rates are also important. Though they're fairly labour free, 3D printers and laser cutters aren't the fastest of production techniques. Waiting a couple of hours for a print is fine if you just want one, but a production run of a thousand is either going to take a very long time or require a lot of 3D printers!

To give you a flavour of the sorts of issues involved, we look at what must be the most common method of mass production: injection moulding of plastic.

As the name suggests, the process involves injecting molten plastic into a mould to form the desired shape. After the plastic has cooled sufficiently, the mould is separated and the part is popped out by a number of ejection pins and falls into a collection bin. The whole cycle is automated and takes much less time than a 3D print, which means that thousands of parts can be easily churned out at a low cost per part.

The expensive part of injection moulding is producing the mould in the first place; this is known as tooling up. The moulds are machined out of steel or aluminium and must be carefully designed and polished so that the moulding process works well and the parts come out with the desired surface finish. Any blemishes in the surface of the mould will be transferred to every part produced using it, so you want to get this right. Including a texture on the surface of the part can help mask any imperfections while potentially giving the finished item a better feel. Often, for a super-smooth surface, the moulds are finished with a process called electrical discharge machining (EDM), which uses high-voltage sparks to vaporise the surface of the metal and gives a highly polished result.
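A back-of-envelope calculation (figures entirely hypothetical) shows why injection moulding only pays off at volume: the fixed tooling cost is amortised over every part produced, on top of a small per-part material and cycle cost:

```python
def cost_per_part(tooling_cost, material_and_cycle_cost, volume):
    # Fixed tooling cost spread across the production run,
    # plus the variable cost of plastic and machine time per part.
    return tooling_cost / volume + material_and_cycle_cost

# Hypothetical: a 10,000-unit-currency mould, 0.50 per part in plastic
# and machine time.
small_run = cost_per_part(tooling_cost=10_000, material_and_cycle_cost=0.50, volume=100)
large_run = cost_per_part(tooling_cost=10_000, material_and_cycle_cost=0.50, volume=100_000)
```

At 100 parts the tooling dominates and each part is ruinously expensive; at 100,000 parts the tooling contribution all but vanishes, which is why moulding competes with 3D printing only once volumes are high.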

The mould also needs to include space for the ejection pins to remove the part after it’s made and a route for the plastic to flow into the mould. If you’ve ever put together a model plane or car, you are familiar with those pathways; they’re the excess sprue, the plastic scaffolding that holds each piece together in the kit and that you have to snap away. In assembled products, the parts are naturally removed from the sprue during production.

Like any production technique, injection moulding has its own design considerations. Because the parts need to be extracted from the mould after they’re formed, very sharp corners and completely vertical faces are best avoided. A slight angle, called the draft, from vertical allows for clean parting of the part and its mould, and consistent wall thicknesses avoid warping of the part as it cools.

If you need the thicker walls for strength, an alternative is to use ribs to add rigidity without lots of additional plastic. A look inside some plastic moulded products you already own will show some of the common techniques for achieving maximum strength with a minimum amount of material and also ways to mould mounting points for PCBs or screw holes for holding the assemblies together.

The simplest moulds are called straight-pull and consist of the mould split into two halves. If your design needs to include vertical faces or complex overhangs, more complicated moulds which bring in additional pieces from the side are possible but add to the tooling-up cost.

One way to reduce the tooling-up costs and also increase the production rate is to mould more than one part at a time. If your parts are small enough, you can replicate many of them on one mould or, as we saw in the model aircraft kit, collect lots of different parts together.

In a process known as multishot moulding, you can even share parts of different colours on the same mould. With carefully measured volumes for each part, one of the colours of plastic is injected first to fill the parts which need to be that colour. Then the other colour is injected to fill the remainder of the mould. Obviously, there is a section of the mould cavity where the different colours mix, but with careful design, that is just part of the sprue and so is discarded.

CERTIFICATION

One of the less obvious sides of creating an Internet of Things product is the issue of certification. If you forget to make the PCB or write only half of the software for your device, it will be pretty obvious that things aren’t finished when it doesn’t work as intended. Fail to meet the relevant certification or regulations, and your product will be similarly incomplete—but you might not realise that until you send it to a distributor, or worse still, after it is already on sale.

For the most part, these regulations are there for good reason. They make the products you use day in, day out safer; make sure that they work properly with complementary products from other suppliers; and ensure that one product doesn't emit lots of unwanted electromagnetic radiation and interfere with the correct operation of other devices nearby.

You may not have noticed before, but if you take a closer look at any gadget that’s near to hand, you will find a cluster of logos on it somewhere…CE, FCC, UL.… Each of these marks signifies a particular set of regulations and tests that the item has passed: the CE mark for meeting European standards; FCC for US Federal Communications Commission regulations; and UL for independent testing laboratory UL’s tests.

The regulations that your device needs to pass vary depending on its exact functionality, target market (consumer, industrial, and so on), and the countries in which you expect to sell it. Negotiating through all this isn’t for the faint of heart, and the best approach is to work with a local testing facility. They not only are able to perform the tests for you but also are able to advise on which sets of regulations your device falls under and how they vary from country to country.

Such a testing facility subjects your device to a barrage of tests (hopefully) far beyond anything it will encounter in general use. Testers check the materials specifications to ensure you’re not using paint containing lead; zap the device with an 8 kV static discharge to see how it copes; probe it with a hot wire heated to 500 degrees Celsius to check that it doesn’t go up in flames; and much more.

Of particular interest is the electromagnetic compatibility, or EMC, testing. This tests both how susceptible your device is to interference (from other electronic devices, power surges on the mains electricity supply, and so on) and how much electromagnetic interference your product itself emits.

Electromagnetic interference is the “electrical noise” generated by the changing electrical currents in circuitry. When generated intentionally, it can be very useful: radio and television broadcasts use the phenomenon to transmit a signal across great distances, as do mobile phone networks and any other radio communication systems such as WiFi and ZigBee. The problem arises when a circuit emits a sufficiently strong signal unintentionally which disrupts the desired radio frequencies. This is sometimes noticeable in the “dit, dit-dit-dit” picked up by a poorly insulated stereo just before your mobile phone starts ringing.

All the tests are performed on a DUT (device under test), which needs to be built to the final specification for the entire product. As a result, the testing will most likely be a critical point in your delivery schedule, and any problems discovered will delay shipment while you iterate through a new revision to fix the issues.

For the EMC tests, the device is isolated in an anechoic radio frequency (RF) chamber to minimise the chance of external electromagnetic interference confusing the tests. It is then run through its normal operations while any emissions are monitored by a spectrum analyzer measuring at a distance of 3 metres from the DUT. This test gives the level of RF radiation at the different frequencies specified in the regulations. If any of them are close to the limit, the test is redone with the measurements taken at a distance of 10 metres. The acceptable limits are lower at the greater distance, but that checks how quickly the signal attenuates; with luck, you’ll still be within limits and gain certification.
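
That attenuation check can be approximated with a back-of-envelope calculation: in the far field, radiated field strength falls off roughly in proportion to distance, so moving the antenna from 3 metres to 10 metres should lower the reading by about 20·log10(10/3), roughly 10.5 dB. This sketch assumes ideal free-space behaviour; the actual limits are set per frequency by the regulations.

```python
import math

def attenuation_db(d_near: float, d_far: float) -> float:
    """dB drop between two measurement distances, assuming the
    far-field 1/distance fall-off in field strength."""
    return 20 * math.log10(d_far / d_near)

def field_at(reading_dbuv: float, d_near: float, d_far: float) -> float:
    """Estimate the field strength at d_far from a reading at d_near."""
    return reading_dbuv - attenuation_db(d_near, d_far)

# Moving from the 3 m to the 10 m position should drop a well-behaved
# emission by roughly 10.5 dB.
print(round(attenuation_db(3, 10), 1))  # 10.5
```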

The resultant test report is added to your technical file, which is referenced by your declaration of conformity. Assembling this is a requirement for certification and documents the key information about your device and the testing it has undergone.

In addition to the test report, you need to gather together PCB layouts, assembly certificates, the certificates for any precertified modules that you have used, and datasheets for critical components. This information is all held in a safe place by the manufacturer (that is, you) in case the authorities need to inspect it.

The location of the technical file is mentioned on the declaration of conformity, which is where you publicly declare to which directives in the regulations your device conforms.

For certain regulations you must also notify a specific, named body; for example, circuits that perform radio communication and so intentionally emit electromagnetic interference must be registered with the FCC when sold in the US. Such registration is in addition to issuing the declaration of conformity for self-certification.

Because of the added complexity and overhead—both administrative and financial—of some of the more involved directives (the intentional emitter rules being a prime example), it is often wise to use pre-approved modules.

You therefore can include a WiFi module (chips, antenna, and associated circuitry), for example, or a mains power adaptor, without having to resubmit to all the relevant testing. As long as you don’t modify the module in any way, the certification done by its manufacturer is deemed sufficient, and you just need to include that in your technical file.

In Europe, you must also register for the Waste Electrical and Electronic Equipment Directive (WEEE Directive). It doesn’t cover any of the technical aspects of products but is aimed instead at reducing the amount of electronic waste that goes to landfill. Each country in the EU has set up a scheme for producers and retailers of electronic and electrical products to encourage more recycling of said items and to contribute towards the cost of doing so.

Retailers can either operate a recycling system in which they accept unwanted electronic devices or join a scheme whose operator takes care of the recycling on their behalf, generally for a membership fee.

In the UK, the Environment Agency maintains a list of schemes that producers can join. Some of the scheme providers have tiers of membership, based on company size or the amount of electrical and electronic equipment being produced. For smaller producers, such as those who ship less than a tonne of electronic equipment, there are fixed-price schemes for a few hundred pounds per year. Larger producers report the total weight of devices they’ve shipped at regular intervals (usually quarterly) and pay proportionally.
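
As a rough sketch of how such a tiered scheme works (the threshold, flat fee, and per-kilogram rate here are hypothetical placeholders, not any real scheme’s prices):

```python
def weee_fee(kg_shipped: float,
             flat_fee: float = 300.0,       # hypothetical fixed-price tier
             threshold_kg: float = 1000.0,  # one-tonne small-producer cut-off
             rate_per_kg: float = 0.40):    # hypothetical proportional rate
    """Flat fee for small producers; proportional to weight shipped
    once above the threshold."""
    if kg_shipped < threshold_kg:
        return flat_fee
    return kg_shipped * rate_per_kg
```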

 

SCALING UP SOFTWARE

Producing a physical thing as a prototype and producing it as a manufactured product turn out to be two entirely different propositions. The initial prototype may well differ in size, shape, colour, materials, finish, and quality from what ends up on the shelf. Yet software is intangible and malleable. There are no parts to order, no bill of materials to pay for. The bits of information that make up the programs which run on the device or on the Internet are invisible. The software you wrote during project development and what ends up in production will be indistinguishable to the naked eye.

Yet, as with the physical form and electronics of the device, software has to be polished before it can be exposed to the real world. After looking at what is involved in deploying software—both on the embedded device and for any online service you have built—we look at the various factors that require this polish: correctness, maintainability, security, performance, and community.

We only touch upon the issues here, in order to give an awareness of what is involved. The resources you use to learn the particular language and framework you have chosen will cover the ways to build and deploy secure, reliable web services in much more detail.

 

DEPLOYMENT

Copying software from a development machine (your laptop or the team’s source code repository) to where it will be run from in production is typically known as deployment, or sometimes, by analogy to physical products, shipping.

In the case of the firmware running on the microcontroller, this will (hopefully) be done once only, during the manufacture of the device. The software will be flashed to the device in a similar way to how you updated your prototype device during development. For a simple device like the Arduino which usually only runs a single program, the process will be identical to that in development. For a device like a BeagleBone or Raspberry Pi which runs an entire operating system, you will want to “package” the various program code, libraries, and other files you have used in a way that you can quickly and reliably make a new build—that is, install all of it onto a new board with a single command.
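
The “install everything with a single command” step can be as simple as a script that replays a fixed list of setup commands on a fresh board. This is an illustrative sketch only: the package names, file paths, and service name are hypothetical.

```python
import subprocess

# Hypothetical setup steps for a Linux board such as a BeagleBone or
# Raspberry Pi; adjust to your own product's dependencies.
STEPS = [
    ["apt-get", "install", "-y", "python3", "python3-pip"],
    ["pip3", "install", "-r", "/opt/myproduct/requirements.txt"],
    ["cp", "/opt/myproduct/myproduct.service", "/etc/systemd/system/"],
    ["systemctl", "enable", "myproduct.service"],
]

def provision(run=subprocess.check_call):
    """Replay every step in order, stopping at the first failure so a
    half-provisioned board is never shipped."""
    for step in STEPS:
        run(step)
```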

The software for an online service will tend to run in a single location. However, it will generally be visible to the whole of the Internet, which means that it will potentially be subject to greater load and to a greater range of accidental or malicious inputs that might cause it to stop working. In addition, the code tends to be more complex and build on more library code; this greater complexity implies a greater potential for bugs. Finally, as a malleable and user-facing software product, there is always the possibility of updating the code to add value by introducing new features or improving the design.

As a result of all of this, it is not merely possible, but necessary, to update the online software components more regularly than those on the device. Having a good deployment process will allow you to do this smoothly and safely. Ideally, with one trigger (such as running a script, or pushing code to a release branch in your code repository), a series of actions will take place which update the software on the server. If you are using a hosted service such as Heroku, there will be simple, standard ways to do this. If you are running your own dedicated web server or perhaps a virtual machine such as an Amazon EC2 instance, there are many solutions, from shell scripts using scp, rsync, or git to copy code, to deployment frameworks such as Capistrano. A tempting option for the near future is Docker.io, which allows the same application you run on your laptop to be packaged up as a virtual “container” that can run unchanged on an Internet-facing server.
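
A minimal one-trigger deploy of the rsync-over-ssh variety can be sketched as follows. The host, paths, and service name are placeholders for your own setup; the function only builds the commands, which makes the script easy to inspect and test before letting it loose on a production server.

```python
import subprocess

def deploy_commands(host, src="./app/", dest="/srv/myapp/"):
    """Commands a single deploy run would execute: copy the code,
    then restart the (hypothetical) service."""
    return [
        ["rsync", "-az", "--delete", src, f"{host}:{dest}"],
        ["ssh", host, "sudo systemctl restart myapp"],
    ]

def deploy(host, run=subprocess.check_call):
    """Run the whole deploy with one call."""
    for cmd in deploy_commands(host):
        run(cmd)
```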

 

CORRECTNESS AND MAINTAINABILITY

So, you’ve sold a thousand Internet-connected coffee machines. Congratulations!

Now it’s time to cross your fingers and hope that you don’t get a thousand complaints that it doesn’t actually work. Perhaps it makes cappuccinos when the customer asked it for a latte. Maybe it tweets “Coffee’s on!” when it isn’t, or vice versa.

Clearly, as a publicly available product, your software has to do what you claimed it would, and do it efficiently and safely.

Testing your code before it is deployed is an important step in helping to avoid such a situation, and your chosen language and development framework will have standard and well-understood testing environments to help ease your task.
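
As a toy illustration of the kind of checks such a suite contains, here is a hypothetical coffee-machine order function with the assertions that would catch the latte/cappuccino mix-up described above:

```python
# Hypothetical drink recipes for the imaginary coffee machine.
MENU = {
    "latte": ["espresso", "steamed milk"],
    "cappuccino": ["espresso", "steamed milk", "milk foam"],
}

def make_drink(order):
    """Return the preparation steps for an order, or raise for
    anything the machine cannot make."""
    if order not in MENU:
        raise ValueError(f"unknown drink: {order}")
    return MENU[order]

# The sort of tests run automatically before every deploy:
assert make_drink("latte") == ["espresso", "steamed milk"]
assert "milk foam" not in make_drink("latte")  # no surprise cappuccinos
```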

As it lives in a central place, the server software is easy to update, either to fix bugs or to introduce new features. This is a real boon for web applications, as it removes an entire class of support issues.

The embedded code in the device, however, is particularly important to test, as that is the hardest to update once the product has been sent out to the users. It may be possible, given that the devices will be connected to the Internet anyway, for the code to be updated over-the-air, and that is one of the selling points of the Electric Imp platform, for example. However, it should be approached with some caution: what if a firmware update itself caused a failure of a home heating system? And as Internet of Things products become more integrated into our lives and our homes, they become a tempting target for hackers.
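
A common safeguard when updating over the air is to verify a cryptographic digest of the downloaded image before flashing it, so a corrupted or tampered download is rejected rather than applied. A minimal sketch (the function names are illustrative, not any particular platform’s API):

```python
import hashlib

def verify_firmware(image: bytes, expected_sha256: str) -> bool:
    """True only if the image matches the digest published with
    the release."""
    return hashlib.sha256(image).hexdigest() == expected_sha256

def apply_update(image: bytes, expected_sha256: str, flash):
    """Flash only verified images; otherwise keep the old firmware
    so a bad download cannot brick the device."""
    if not verify_firmware(image, expected_sha256):
        raise ValueError("firmware digest mismatch; keeping old image")
    flash(image)
```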

 

Ethical Issues in IoT

 

PRIVACY

The Internet, as a massive open publishing platform, has been a disruptive force as regards the concept of privacy. Everything you write might be visible to anyone online: from minutiae about what you ate for breakfast to blog posts about your work, from articles about your hobbies to Facebook posts about your parties with friends. There is a value in making such data public: the story told on the Internet becomes your persona and defines you in respect of your friends, family, peers, and potential employers. But do you always want people to be able to see that data? With massively increased storage capabilities, this data can be trivially stored and searched. Do you want not just your family and friends but also companies, the government, and the police to be able to see information about you, forever?

A common argument is “if you’ve got nothing to hide, then you’ve got nothing to fear.” There is some element of truth in this, but it omits certain important details, some of which may not apply to you, but apply to someone:

·         You may not want your data being visible to an abusive ex-spouse.

·         You might be at risk of assassination by criminal, terrorist, or state organizations.

·         You might belong to a group which is targeted by your state (religion, sexuality, political party, journalists).

More prosaically, you change and your persona changes. Yet your past misdemeanours (drunken photos, political statements) may be used against you in the future.

Let’s look now at how the Internet of Things interacts with this topic. As the Internet of Things is about Things, which are rooted in different contexts than computers, it makes uploading data more ubiquitous. Let’s consider the mobile phone, in particular an Internet-connected phone with on-board camera. Although we don’t typically consider phones as Internet of Things devices, the taking of a photo with a camera phone is a quintessential task for a Thing: whereas in the past you would have had to take a photo, develop it, take the printed photo to your computer, scan it, and then upload it (or take your digital camera to the computer and transfer the photo across via USB), now you can upload that compromising photo, in a single click, while still drunk. The ability to do something is present in a defined context (the personal) rather than locked in a set of multiple processes, culminating in a general-purpose computer.

Even innocuous photos can leak data. With GPS coordinates (produced by many cameras and most smartphones) embedded into the picture’s EXIF metadata, an analysis of your Flickr/Twitpic/Instagram feed can easily let an attacker infer where your house, your work, or even your children’s school is. Even if you stripped out the data, photo-processing technology enables searching of similar photos, which may include these coordinates or other clues.
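
EXIF stores GPS positions as degree/minute/second values; converting them to the decimal coordinates used by mapping services takes one line of arithmetic, which is exactly why the leak is so easy to exploit:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degree/minute/second GPS values to a
    signed decimal coordinate (negative for south/west)."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# A photo tagged 51°30'26"N, 0°7'39"W pins its taker to central London.
lat = dms_to_decimal(51, 30, 26, "N")  # ~51.5072
lon = dms_to_decimal(0, 7, 39, "W")    # ~-0.1275
```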

Similar issues exist with sports-tracking data, whether produced by an actual Thing, such as Nike+ or a GPS watch, or a pseudo-Thing, like the RunKeeper app on your smartphone. This data is incredibly useful to keep track of your progress, and sharing your running maps, speed, heartbeat, and the like with friends may be motivating. But again, it may be trivial for an attacker to infer where your house is (probably near where you start and finish your run) and get information about the times of day that you are likely to be out of the house.
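
The inference really is that trivial: given a handful of shared runs, the centroid of the start points is already a workable guess at the runner’s home. A toy sketch with made-up coordinates:

```python
def likely_home(runs):
    """Centroid of run start points: a crude estimate of where a
    runner lives, from nothing more than shared GPS traces."""
    lats = [start[0] for start, end in runs]
    lons = [start[1] for start, end in runs]
    return sum(lats) / len(lats), sum(lons) / len(lons)

# Each run is a (start, end) pair of (lat, lon) tuples; illustrative data.
runs = [((51.5074, -0.1278), (51.5100, -0.1200)),
        ((51.5076, -0.1280), (51.5000, -0.1300))]
home = likely_home(runs)
```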

When we tell family and friends about the Good Night Lamp or the WhereDial, they often bristle and start muttering about “Big Brother”. The idea of people knowing where you are can evoke strong emotions. Yet the idea of knowing that your loved ones are safe is a similarly deep-seated human emotion. To the extent that you allow your location to be shared with people you’ve chosen to share it with, there is no infringement of privacy. But the decision to give your mother a Good Night Lamp might seem less sensible months later when you arrive home late at night. Or you might regret giving your partner a WhereDial if she later becomes jealous and suspicious of your innocent (or otherwise) movements.

Even if these devices are themselves respectful of your privacy, their security or lack thereof might allow an attacker to get information. For example, if it were possible to read an IP packet going from the goodnightlamp.com servers to a household, could you find out that the associated “big lamp” had been switched off? Even if this packet is encrypted, could an attacker infer something by the fact that a packet was sent at all? (That is, will the servers have to regularly send encrypted “nothing happened” packets?) These risks are to be considered very carefully by responsible makers of Internet of Things devices.
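
One mitigation is to make every report, including the “nothing happened” heartbeat, exactly the same size before encryption, so packet length alone reveals nothing. A sketch (the packet size and message format here are assumptions, not the Good Night Lamp’s actual protocol):

```python
import json

PACKET_SIZE = 64  # every report is padded to this many bytes

def encode_status(event=None):
    """Encode an event, or a heartbeat when nothing happened, as a
    fixed-size payload ready for encryption."""
    payload = json.dumps({"event": event or "heartbeat"}).encode()
    if len(payload) > PACKET_SIZE:
        raise ValueError("payload too large for fixed-size packet")
    return payload.ljust(PACKET_SIZE, b" ")

# "Lamp off" and "nothing happened" are indistinguishable by size alone.
assert len(encode_status("lamp_off")) == len(encode_status()) == PACKET_SIZE
```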

So far we’ve looked at devices that you, as an individual, choose to deploy. But as sensor data is so ubiquitous, it inevitably detects more than just the data that you have chosen to make public. For a start, we saw previously that many “things” have little in their external form that suggests they are connected to the Internet. When you grab an Internet-connected scarf from the coat rack or sit on an Internet-connected chair, should you have some obvious sign that data will be transmitted or an action triggered? Urbanist and technologist Adam Greenfield has catalogued interactive billboards with hidden cameras which record the demographics of the people who look at them and vending machines which choose the products to display on a similar basis.

Moreover, let us consider the electricity smart meter. The real-time, accurate measurement of electricity has many admirable goals. Understanding usage patterns can help companies to produce electricity at the right times, avoiding overproduction and optimizing efficiency. With humans consuming ever more energy, in a time when our fossil fuel resources are becoming ever more scarce and the impact of using them ever more serious, this is increasingly important. The aggregate data collected by the companies is useful for the noble environmental goals we’ve mentioned...but how about individual data?

If you could mine the data to see subtle peaks, associated with kettles being switched on for tea or coffee, perhaps you could infer what television programmes a household watches. If there are four longer peaks in the morning, this might suggest that four family members are getting up for an electric shower before going to school or work. Now what if you triangulate this data with some other data, for example, the water meter readings?

Smart electricity meters are currently being rolled out across Europe and will, in fact, soon be compulsory. Giovanni Buttarelli, assistant director of the European Data Protection Supervisor, has warned that “together with data from other sources, the potential for extensive data mining is very significant”. The idea of analysing multiple huge datasets is now a reality. There are smart algorithms, and there is the computing power to run them. By combining both ends of the long tail (the cheap, ubiquitous Internet of Things devices on the one hand and the expensive, sophisticated, powerful data-mining processors on the other), it is possible to process and understand massive quantities of data.
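
The kettle inference needs nothing more sophisticated than spotting readings far above a baseline. A toy sketch over made-up half-hourly wattage figures:

```python
def find_peaks(readings, baseline, threshold):
    """Indices where consumption jumps more than `threshold` watts
    above the baseline: the kettle-shaped spikes an analyst could
    correlate with, say, television ad breaks."""
    return [i for i, w in enumerate(readings) if w - baseline > threshold]

day = [200, 210, 2100, 230, 220, 2050, 215, 200]  # watts, illustrative
print(find_peaks(day, baseline=210, threshold=1000))  # [2, 5]
```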

How powerful this ability will be may well depend on what data you have available to compare. If an electricity supplier was also able to buy data from, say, supermarket loyalty card schemes, the supplier could compare the information from inside the household with the family’s shopping or fuel bills. Of course, it’s currently unlikely that a supermarket would sell that kind of individual data. But as our attitude to privacy changes, it is not outside the realms of possibility.

It is very important to note that even aggregate data can “leak” information. If you can see data collected for a street, for example, then comparing a week when a household is away on holiday with a normal week when they are at home might tell you about their usage. Some very interesting questions can be raised about this: should companies be prevented from trading data with each other? Should there be legal limits to what data can be kept or what analyses performed on it? Or do we have to think the unthinkable and admit that privacy is no longer possible in the face of massive data combined with data mining?

As sensors such as CCTV cameras, temperature meters, footfall counters, and Bluetooth trackers are installed in public and private spaces, from parks to shops, data about you is collected all the time. The term “data subject” has been coined for this purpose. Although you may not own the data collected, you are the subject of it and should have some kind of rights regarding it: transparency over what is collected and what will be done with it, the access to retrieve the data at the same granularity that it was stored, and so on.

CONTROL

Some of the privacy concerns we looked at in the preceding sections really manifest only if the “data subject” is not the one in control of the data. The example of the drunken photo is more sinister if it was posted by someone else, without your permission. This is a form of cyberbullying, which is increasingly prevalent in schools and elsewhere.

Although you, as a loving son/daughter/spouse/parent/friend, may quite reasonably want to share your location or your bedside lamp with your family and friends, what if you are asked to do so? If you are gifted a WhereDial or a Good Night Lamp, is there an expectation that you use it, even if you don’t really want to?

While the technology itself doesn’t cause any controlling behaviour, it could easily be applied by a spouse, parent, or employer in ways that manifest themselves as abusive, interfering, or restrictive, in more or less sinister ways. In the case of an employer, we are bound to see cases in the future in which employees are contractually obliged to share data collected by some Internet of Things device. We will certainly see legal and ethical discussion about this!

Already, companies and organizations are looking at mashing up data sources and apps and may start to offer financial incentives to use Internet of Things devices: for example, reductions in health insurance if you use an Internet-connected heart monitor, have regular GPS traces on a run-tracking service, or regularly check in to a gym. High-end cars already have Internet-connected tracking and security systems which may even be a requisite in getting insurance at all. And as we saw, smart energy meters are currently moving from a financial incentive to a legal requirement.

As with questions about privacy, there are almost always good reasons for giving up some control. From a state perspective, there may be reasons for collective action, and information required to defend against threats, such as that of terrorism. The threat of one’s country becoming a police state is not merely a technological matter: institutions such as democracy, the right to protest, free press, and international reputation should balance this.

ENVIRONMENT

We have already touched on several environmental issues in the preceding sections, and we’ll come back to the themes of data, control, and the sensor commons. First, let’s look at the classic environmental concerns about the production and running of the Thing itself.

PHYSICAL THING

Creating the object has a carbon cost, which may come from the raw materials used, the processes used to shape them into the shell, the packing materials, and the energy required to ship them from the manufacturing plant to the customer. It’s easier than ever to add up the cost of these emissions: for example, using the ameeConnect API (www.amee.com/pages/api), you can find emissions data and carbon costs for the life-cycle use of different plastics you might use for 3D printing or injection moulding.
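
In outline, such a life-cycle figure is a weighted sum over the bill of materials. The emission factors below are illustrative stand-ins, not real ameeConnect data:

```python
# Hypothetical cradle-to-gate factors, kg CO2e per kg of material.
FACTORS = {"ABS": 3.1, "PLA": 2.0, "cardboard": 0.8}

def embodied_carbon(bill_of_materials):
    """Total material emissions for a {material: kg} bill of materials."""
    return sum(FACTORS[m] * kg for m, kg in bill_of_materials.items())

print(round(embodied_carbon({"ABS": 0.25, "cardboard": 0.1}), 3))  # 0.855
```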

Calculating the energy costs for manufacture is harder. amee’s prototype of an instrumented coffee production line gives real-time monitoring of the carbon cost of production for each batch and also stamps a summary on each packet, along with a QR code identifying that batch and linking to a more detailed analysis, in the manner of Bruce Sterling’s self-describing “spimes” (Shaping Things, Bruce Sterling, MIT Press, 2005).

You may need to consider other environmental factors, such as emissions produced during normal operation or during disposal of the object. For example, thermal printer paper may contain Bisphenol-A, which has health and environmental concerns. BERG’s Internet of Things product, the Little Printer, is sold using only BPA-free paper, but initial reactions to it suggested that using paper at all is an environmental issue. Of course, a printout the size of a shopping receipt has some carbon cost. On the other hand, the printout will last, perhaps for a long time, whereas a digital device needs to constantly use electricity to display the same information on its LCD. This kind of trade-off may sound like splitting hairs, but these are the sorts of discussions that you may need to have to be able to consider your environmental cost and to be able to market and defend your product in that area.

In the preceding chapter, we discussed RoHS regulation. Whether or not your market requires you to comply with this European directive, doing so may be the environmentally ethical decision. Nowadays, most consumer electronics do indeed conform to it; health benefits are obtained both at the point of manufacture and at waste disposal/recycling.

SOLUTIONS TO ETHICAL ISSUES

Compared to a simple, physical object, an instrumented Internet of Things device does seem to use vastly more resources in its production, daily use, and waste disposal. Considering our starting point—that this kind of instrumentation is now cheap enough to put everywhere—it seems as though the mass rollout of the Internet of Things will only contribute to environmental issues! Assuming that you want to go ahead with manufacturing a Thing regardless, we hope that you will be aware of the various possibilities and consider ways to reduce your impact and also consider contributing to offsetting schemes.

From a more optimistic point of view, it’s also true that the realisation that the number of Internet-connected devices will be exploding in the coming years is spurring massive research into low-power efficient chips and communications.

THE INTERNET OF THINGS AS PART OF THE SOLUTION

Gavin Starks, former CEO of amee, has spoken convincingly of instrumenting the world precisely to save it. The trade policy scholar Brink Lindsey argued that the 1990s was the “age of abundance”, but now the Economist calls the 2010s the “age of scarcity” in comparison (www.economist.com/node/15404916). Within 13 years, humans will have modified 50 percent of the planet. We are approaching (or have passed) the peak of the planet’s oil reserves. Water may be the next commodity to be fought over. The world’s industrial nations cannot agree on effective ways of turning back anthropogenic climate change.

While Starks’s lectures are timely and necessary, as a good hacker, he prefers to do something about the problem: try to solve it through technology, information, and awareness. We already discussed distributed sensor networks as a social and political act: the potential for global environmental action is also massive. A UN Environment Programme report warns that the lack of reliable and consistent time-series data on the state of the environment is a major barrier to increasing the effectiveness of policies and programmes.

If community-led sensor networks can help supplement government and international science measurements, then we should be doing everything we can to help.

Instrumenting production lines, home energy usage, transport costs, building energy efficiency, and all other sources of efficiency might seem extreme, but it may be a vital, imperative task.

Other technologies which aren’t principally linked with the Internet of Things will also be important. If 67 percent of the world’s water usage is in agriculture, then are there ways to reduce that quantity through technology? Meat farming uses a disproportionate amount of resources, so perhaps the latest advances in lab-grown meat will be critical. Even here, instrumenting the supply chains, measuring to be certain that new methods really are more efficient, and reducing inefficiencies by automation could well use Internet of Things solutions to help measure and implement the solutions. The Internet of Things could become a core part of the solution to our potentially massive environmental problems.

Projects such as a carbon score for every company in the UK will help change attitudes, perhaps simply by gamifying the process of improving one’s emissions, but also by having an objective measure that could, in future, be as important to a company’s success as its credit score.

In the face of these suggestions—collective sensor networks and massive business process engineering not for profit but for environmental benefits— you might wonder whether these calls to action amount to critiques of capitalism. Is the status quo, capitalism as-is, still viable as the global operating system? Of course, capitalism’s great success has always been how it routes around problems and finds a steady state which is the most efficient to the market. There is no reason why capitalism as-could-be should not be part of the process of striving towards efficiency on an environmental as well as monetary level.

There is a real sense that the technology we have discussed in this book could be revolutionary. Adam Greenfield has used the iconography of Occupy in discussing citizens’ uses of the Internet of Things. Rob van Kranenburg has similarly called to “Occupy the [Internet of Things] gateways” with open source software and hardware (www.designspark.com/blog/an-open-internet-of-things).

Van Kranenburg also makes alternative, starker proposals: not only may privacy become obsolete, but even those currently personal possessions such as cars might also become communal, through the increasing move from ownership to rental models. Why have the inefficiency of a car for every person, when your apartment block could have enough cars for all, from city run-arounds, to a few four-wheel-drive cars and formal cars too, to be used as needed? As resources become ever scarcer, a greater percentage of income might be spent on covering rental of all goods: cars, food, possibly even housing. This kind of futurology leads to scenarios such as the death of money itself: a fixed proportion of income to rent needed services from a commercial supplier is more or less indistinguishable from taxation to pay for communal services. Whether the death of privacy and of money sounds like utopia or dystopia to you, it is worth considering the impact tomorrow of the technologies we implement to deal with the problems of today.

As a counterpoint to these messages of doom, Russell Davies of London’s Really Interesting Group (RIG) often tries to bring the discussion of Things back to fun. Although this may not sound as engaged or political an attitude, by looking for the unintended uses for technologies, the end users, rather than the political elites, can turn them into platforms for human expression. Davies makes the examples of Christmas lights for house-fronts being repurposed to animate singalongs, something that the manufacturers could never have imagined! Similarly, the World Wide Web was originally conceived to share academic papers but has taken on the roles of brokering business on the one hand and publishing pictures of kittens on the other without breaking its step. The Internet of Things will also, if we let it, become a platform for whatever people want it to be. Although this may be less important than saving our species from environmental disaster, perhaps it is no less ethical in terms of asserting our humanity through, and not simply in spite of, the technology that we might have feared would dehumanise us.

CAUTIOUS OPTIMISM

Between the tempting extremes of technological Luddism and an unquestioning positive attitude is the approach that we prefer: one of cautious optimism. Yes, the Luddites were right—technology did change the world that they knew, for the worse, in many senses. But without the changes that disrupted and spoilt one world, we wouldn’t have arrived at a world, our world, where magical objects can speak to us, to each other, and to vastly powerful machine intelligences over the Internet.

It is true that any technological advance could be co-opted by corporations, repressive governments, or criminals. But (we hope) technology can be used socially, responsibly, and (if necessary) subversively to mitigate this risk. Although the Internet of Things can be, and we hope will always be, fun, being aware of the ethical issues around it, and facing them responsibly, will help make it more sustainable and more human too.
MATRUSRI ENGINEERING COLLEGE

SAIDABAD, HYDERABAD – 500 059

Department of Computer Science and Engineering

                                                                   I Internal Assessment

Class/Branch: BE VII-SEM                                                   Max Marks: 20M

Subject: Fundamentals of IoT                 Code: OE773EC                 Duration: 60 Min

Teacher: Mr. V. Karunakar Reddy

Answer all questions from PART-A and any two from PART-B

 

Q.No    Question                                                                Marks   CO    BL

PART-A (3 x 2 = 6 Marks)

1.      Examine the main parts of an IoT system.                                  2     CO1   L4

2.      Differentiate open source vs closed source software.                      2     CO2   L5

3.      Define Clockodillo.                                                       2     CO3   L2

PART-B (2 x 7 = 14 Marks)

4. (a)  Describe an example of an IoT service that uses WebSocket-based
        communication.                                                            3     CO1   L2

   (b)  Determine the IoT levels for a structural health monitoring system.       4     CO1   L5

5. (a)  How do you prototype the physical design of an IoT device using
        CNC milling?                                                              4     CO2   L5

   (b)  What are the different application layer protocols used in IoT?           3     CO2   L3

6. (a)  Describe the different communication APIs used in IoT.                    4     CO1   L3

   (b)  What are the embedded computing devices used in developing an
        IoT device?                                                               3     CO2   L4

[Charts: Bloom's level wise Marks Distribution | Course Outcome wise Marks Distribution]

BL - Bloom's Taxonomy Levels [1-Remember, 2-Understand, 3-Apply, 4-Analyze, 5-Evaluate, 6-Create]


1. Examine the main parts of an IoT system.

Ans: An IoT system consists of three main parts: sensors, network connectivity, and data storage applications.

2. Differentiate open source vs closed source software.

Ans: Open source software (OSS) is software whose source code is freely available on the Internet. The code can be copied, modified, or deleted by other users and organisations. Because the software is open to the public, it is constantly updated, improved, and expanded as more people work on it.

Closed source software (CSS) is the opposite of OSS: it uses proprietary, closely guarded code. Only the original authors of the software can access, copy, and alter it. With closed source software you are not purchasing the software, but only paying to use it.

 

3. Define Clockodillo.

Ans: Clockodillo is an Internet-connected task timer. The user sets a dial to a number of minutes, and the timer ticks down until the time is completed. It also sends messages to an API server to let it know that a task has been started, completed, or cancelled.

 

 

Long questions:

4. (a) Describe an example of an IoT service that uses WebSocket-based communication.

Ans: WebSocket APIs allow bi-directional, full-duplex communication between clients and servers. They do not require a new connection to be set up for each message. Communication begins with a connection setup request sent by the client to the server. This request is sent over HTTP, and the server interprets it as an upgrade request.

After the connection is established, the client and server can send data/messages to each other in full-duplex mode. WebSocket APIs reduce the network traffic and latency caused by connection setup and termination requests for each message.

 

4. (b) Determine the IoT levels for a structural health monitoring system.

Ans: Structural health monitoring (SHM) systems use a network of sensors to monitor the vibration levels in structures such as bridges and buildings. The data collected from these sensors is analysed to assess the health of the structure: to detect cracks and mechanical breakdown, locate damage to the structure, and estimate its remaining life.

Consider an IoT Level-6 system for SHM. The system consists of multiple end nodes placed in different locations to monitor damage and breakdowns. The end nodes send the data to the cloud in real time using a WebSocket service, and the data is stored in cloud storage. SHM systems use a large number of wireless sensor nodes which are powered by traditional batteries.
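As a sketch of the data flow on such an end node, the snippet below batches vibration readings from one sensor and flags the batch when the peak exceeds an alarm threshold, producing the kind of message a Level-6 node might push to the cloud. The node id, units, and 0.8 threshold are invented illustration values, not figures from the syllabus.

```python
from statistics import mean

# Hypothetical vibration threshold above which the structure is flagged.
ALARM_THRESHOLD = 0.8

def summarise_readings(node_id: str, readings: list[float]) -> dict:
    """Summarise one batch of vibration readings from an SHM sensor node
    into the message the end node would send to cloud storage."""
    peak = max(readings)
    return {
        "node": node_id,
        "mean_vibration": mean(readings),
        "peak_vibration": peak,
        "alarm": peak > ALARM_THRESHOLD,
    }

msg = summarise_readings("bridge-07", [0.12, 0.15, 0.95, 0.11])
print(msg["alarm"])  # True: the 0.95 peak exceeds the threshold
```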

5. (a) How do you prototype the physical design of an IoT device using CNC milling?

Ans: Computer Numerically Controlled (CNC) milling is similar to 3D printing but is a subtractive manufacturing process rather than an additive one. The CNC part just means that a computer controls the movement of the milling head, much like it does the extruder in an FDM 3D printer. However, rather than building up the desired model layer by layer from nothing, it starts with a block of material larger than the finished piece and cuts away the parts which aren't needed—much like a sculptor chips away at a block of stone to reveal the statue, except that milling uses a rotating cutting bit rather than a chisel.


CNC mills can work with a much greater range of materials than 3D printers can. They can also be used for more specialised tasks, such as creating custom printed circuit boards. An advantage of milling a board over etching it is that the mill can drill any holes for components or mounting at the same time, saving you from having to do it manually afterwards with your drill press.

The main attribute that varies among CNC mills is the number of axes of movement they have:

2.5 axis: Whilst this type has three axes of movement—X, Y, and Z—it can move only two of them at any one time.

3 axis: Like the 2.5-axis machine, this machine has a bed which can move in the X and Y axes and a milling head that can move in the Z. However, it can move all three at the same time.

4 axis: This machine adds a rotary axis to the 3-axis mill to allow the piece being milled to be rotated around an extra axis, usually the X (this is known as the A axis). An indexed axis only allows the piece to be rotated to set points so that a further milling pass can then be made, for example, to flip it over to mill the underside; a fully controllable rotating axis allows the rotation to happen as part of the cutting instructions.

5 axis: This machine adds a second rotary axis—normally around the Y—which is known as the B axis.

6 axis: A third rotary axis—known as the C axis if it rotates around Z—completes the range of movement in this machine.

The software used for CNC milling is split into two types:

CAD (Computer-Aided Design) software lets you design the model.

CAM (Computer-Aided Manufacture) software turns that into a suitable tool path—a list of co-ordinates for the CNC machine to follow which will result in the model being revealed from the block of material.
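To make the idea of a tool path concrete, here is a minimal sketch of the kind of output CAM software produces: an ordered list of (x, y, z) coordinates tracing a rectangle's perimeter, lowering the cutter after each pass. The dimensions and step depth are arbitrary illustration values; real CAM output would also encode feed rates and tool changes.

```python
def rectangle_toolpath(width, height, depth, step):
    """Generate (x, y, z) waypoints tracing a rectangle's perimeter,
    lowering the cutter by `step` after each pass until `depth` is reached."""
    path = []
    z = 0.0
    while z > -depth:
        z = max(z - step, -depth)           # plunge for the next pass
        path += [(0, 0, z), (width, 0, z),  # visit the four corners in order
                 (width, height, z), (0, height, z), (0, 0, z)]
    return path

path = rectangle_toolpath(40, 20, 3, 1)  # 3 mm deep cut in 1 mm passes
print(len(path))  # 15 waypoints: 3 passes x 5 corner points
```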

5. (b) What are the different application layer protocols used in IoT?

Ans:

HTTP

The Internet is much more than just "the web", but inevitably web services carried over HTTP hold a large part of our attention when looking at the Internet of Things. HTTP is, at its core, a simple protocol. The client requests a resource by sending a command to a URL, with some headers. We use the current version of HTTP, 1.1, in these examples. Let's try to get a simple document at http://book.roomofthings.com/hello.txt. You can see the result if you open the URL in your web browser.

[Figure: a browser showing "Hello World"]

The basic structure of the request the browser sends to the server looks like this:

GET /hello.txt HTTP/1.1
Host: book.roomofthings.com

Notice how the message is written in plain text, in a human-readable way. We specified the GET method because we're simply getting the page.

The Host header is the only required header in HTTP 1.1. It is used to let a web server that serves multiple virtual hosts point the request to the right place. Well-written clients, such as your web browser, pass other headers as well.

HTTPS: ENCRYPTED HTTP

We have seen how the request and response are created in a simple text format. If someone eavesdropped on your connection (easy to do with tools such as Wireshark if you have access to the network at either end), that person could easily read the conversation. In fact, it isn't the format of the protocol that is the problem: even if the conversation happened in binary, an attacker could write a tool to translate the format into something readable. Rather, the problem is that the conversation isn't encrypted. The HTTPS protocol is actually just plain old HTTP carried over the Secure Socket Layer (SSL) protocol. An HTTPS server listens on a different port (usually 443) and on connection sets up a secure, encrypted connection with the client (using some fascinating mathematics and clever tricks such as the Diffie–Hellman key exchange). When that's established, both sides just speak HTTP to each other as before.

This means that a network snooper can find out only the IP address and port number of the request (because both of these are public information in the envelope of the underlying TCP message, there's no way around that). After that, all it can see is that packets of data are being sent in a request and packets are returned for the response.
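The plain-text request and status line described in this answer can be assembled and parsed in code. A small self-contained sketch, with no network access and illustrative host/path strings only:

```python
def build_get_request(host: str, path: str) -> str:
    """Assemble a minimal HTTP/1.1 GET request; Host is the only
    header that HTTP 1.1 actually requires."""
    return f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n"

def parse_status_line(response: str) -> tuple[str, int, str]:
    """Split a status line like 'HTTP/1.1 200 OK' into (version, code, reason)."""
    version, code, reason = response.split("\r\n", 1)[0].split(" ", 2)
    return version, int(code), reason

req = build_get_request("book.roomofthings.com", "/hello.txt")
print(req.splitlines()[0])  # GET /hello.txt HTTP/1.1
print(parse_status_line("HTTP/1.1 200 OK\r\n\r\nHello World!"))
```

Because the whole exchange is readable text like this, the HTTPS point above follows directly: without encryption, anyone on the path can read both sides of the conversation.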

6. (a) Describe the different communication APIs used in IoT.

Ans:

Request-Response: Request-Response is a communication model in which the client sends requests to the server and the server responds to them. When the server receives a request, it decides how to respond, fetches the data, retrieves resource representations, prepares the response, and then sends the response to the client. Request-Response is a stateless communication model: each request-response pair is independent of the others.

Publish-Subscribe: Publish-Subscribe is a communication model that involves publishers, brokers, and consumers. Publishers are the source of data. Publishers send data to topics which are managed by the broker, and are not aware of the consumers. Consumers subscribe to topics via the broker. When the broker receives data on a topic from a publisher, it sends the data to all the subscribed consumers.

Push-Pull: Push-Pull is a communication model in which data producers push data into queues and consumers pull data from the queues. Producers do not need to be aware of the consumers.

Exclusive Pair: Exclusive Pair is a bidirectional, full-duplex communication model that uses a persistent connection between the client and server. Once the connection is set up, it remains open until the client sends a request to close it.
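The Publish-Subscribe model above can be sketched as a minimal in-memory broker. This is an illustrative toy, not the API of any particular broker such as an MQTT implementation; the topic names are made up.

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory publish-subscribe broker: publishers send data to
    topics, and the broker fans it out to every subscribed consumer."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a consumer callback for a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, data):
        """Deliver data to all subscribers; publishers never see consumers."""
        for callback in self._subscribers[topic]:
            callback(data)

broker = Broker()
received = []
broker.subscribe("home/temperature", received.append)
broker.publish("home/temperature", 22.5)  # delivered to the subscriber
broker.publish("home/humidity", 40)       # no subscribers, so nobody receives it
print(received)  # [22.5]
```

Note how the publisher only names a topic: the broker, not the publisher, knows who is subscribed, which is exactly the decoupling the model is designed for.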

6. (b) What are the embedded computing devices used in developing an IoT device?

Ans:

MICROCONTROLLERS

Microcontrollers are very limited in their capabilities—which is why 8-bit microcontrollers are still in use, although the price of 32-bit microcontrollers is now dropping to the level where they're starting to be edged out. Usually, they offer RAM capabilities measured in kilobytes and storage in the tens of kilobytes.

SYSTEM-ON-CHIPS

In between the low-end microcontroller and a full-blown PC sits the SoC. Like the microcontroller, these SoCs combine a processor and a number of peripherals onto a single chip but usually have more capabilities. The processors usually range from a few hundred megahertz, nudging into the gigahertz for top-end solutions, and include RAM measured in megabytes rather than kilobytes. Storage for SoC modules tends not to be included on the chip, with SD cards being a popular solution.

Processor Speed

The processor speed, or clock speed, of your processor tells you how fast it can process the individual instructions in the machine code for the program it's running. Naturally, a faster processor speed means that it can execute instructions more quickly. Microcontrollers tend to be clocked at speeds in the tens of MHz, whereas SoCs run at hundreds of MHz or possibly low GHz. If your device will be crunching lots of data—for example, processing video in real time—then you'll be looking at an SoC platform.

RAM

RAM provides the working memory for the system. If you have more RAM, you may be able to do more things or have more flexibility over your choice of coding algorithm. If you're handling large datasets on the device, that could govern how much space you need. You can often find ways to work around memory limitations, either in code or by handing off processing to an online service. Microcontrollers with less than 1KB of RAM are unlikely to be of interest, and if you want to run standard encryption protocols, you will need at least 4KB, and preferably more. For SoC boards, particularly if you plan to run Linux as the operating system, we recommend at least 256MB.

Networking

How your device connects to the rest of the world is a key consideration for Internet of Things products. Wired Ethernet is often the simplest for the user—generally plug and play—and cheapest, but it requires a physical cable. Wireless solutions obviously avoid that requirement but introduce more complicated configuration.

Wi-Fi is the most widely deployed and provides an existing infrastructure for connections, but it can be more expensive and less optimised for power consumption than some of its competitors.

Other short-range wireless technologies can offer better power-consumption profiles or costs than Wi-Fi, but usually with the trade-off of lower bandwidth. ZigBee is one such technology, aimed particularly at sensor networks and scenarios such as home automation. The recent Bluetooth LE protocol (also known as Bluetooth 4.0) has a very low power-consumption profile similar to ZigBee's and could see more rapid adoption due to its inclusion in the standard Bluetooth chips used in phones and laptops.

USB

If your device can rely on a more powerful computer being nearby, tethering to it via USB can be an easy way to provide both power and networking.

Power Consumption

Faster processors are often more power hungry than slower ones. For devices which might be portable or rely on an unconventional power supply (batteries, solar power) depending on where they are installed, power consumption may be an issue. Even with access to mains electricity, lower power consumption may be a desirable feature.

However, processors may have a minimal power-consumption sleep mode. This mode may allow you to use a faster processor to quickly perform operations and then return to low-power sleep. Therefore, a more powerful processor may not be a disadvantage even in a low-power embedded device.

Interfacing with Sensors and Other Circuitry

In addition to talking to the Internet, your device needs to interact with something else—either sensors to gather data about its environment, or motors, LEDs, screens, and so on, to provide output. You could connect to the circuitry through some sort of peripheral bus—SPI and I2C being common ones—or through ADC or DAC modules to read or write varying voltages, or through generic GPIO pins, which provide digital on/off inputs or outputs. Different microcontrollers and SoC solutions offer different mixtures of these interfaces in differing numbers.
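As a small worked example of the ADC interfacing mentioned above, a raw ADC count can be scaled to a voltage using the converter's resolution and reference voltage. The 10-bit, 3.3 V figures below are common defaults on hobbyist boards, assumed here for illustration rather than taken from the text.

```python
def adc_to_volts(raw: int, vref: float = 3.3, bits: int = 10) -> float:
    """Scale a raw ADC reading (0 .. 2**bits - 1) to a voltage.
    A 10-bit ADC maps counts 0-1023 onto 0 V .. vref."""
    max_count = (1 << bits) - 1
    if not 0 <= raw <= max_count:
        raise ValueError(f"raw reading must be in 0..{max_count}")
    return raw * vref / max_count

print(adc_to_volts(0))     # 0.0
print(adc_to_volts(1023))  # approximately 3.3 (full-scale reading)
```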

SHORT QUESTIONS:

1. Why do IoT systems have to be self-adapting and self-configuring?

Ans: Self-adapting: IoT devices and systems may have the capability to dynamically adapt to changing contexts and take actions based on their operating conditions, the user's context, or the sensed environment.

Self-configuring: IoT devices have self-configuring capability, allowing a large number of devices to work together to provide certain functionality, set up the networking, and fetch the latest software upgrades with minimal manual or user intervention.

2. What does the Internet Protocol Suite consist of?

Ans: The Internet Protocol Suite (TCP/IP) consists of four layers: the link layer, the internet layer (IP), the transport layer (TCP/UDP), and the application layer (e.g. HTTP).

3. Examine the main parts of an IoT system.

Ans: An IoT system consists of three main parts: sensors, network connectivity, and data storage applications.

4. Define sensors and actuators.

Ans: Sensors: Sensors are the ways of getting information into your device—finding out things about your surroundings.

Actuators: Actuators are the outputs of the device—the motors, lights, and so on—which let your device do something to the outside world.

5. What are the functions carried out by an IoT gateway?

Ans:
(a) Forwarding packets between the LAN and WAN at the IP layer.
(b) Performing application layer functions between IoT nodes and other entities.
(c) Enabling local, short-range communication between IoT devices.

 

6. Differentiate open source vs closed source software.

Ans: Open source software (OSS) is software whose source code is freely available on the Internet. The code can be copied, modified, or deleted by other users and organisations. Because the software is open to the public, it is constantly updated, improved, and expanded as more people work on it.

Closed source software (CSS) is the opposite of OSS: it uses proprietary, closely guarded code. Only the original authors of the software can access, copy, and alter it. With closed source software you are not purchasing the software, but only paying to use it.

 

SET-2 QUESTIONS (only the following two questions differ from Set-1; the rest are the same)

LONG:

Describe an example of an IoT service that uses REST-based communication APIs.

Ans: REST-based Communication APIs:

Representational State Transfer (REST) is a set of architectural principles by which you can design web services and web APIs that focus on a system's resources and on how resource states are addressed and transferred. REST APIs follow the request-response communication model. The REST architectural constraints apply to the components, connectors, and data elements within a distributed hypermedia system. The constraints are as follows:

Client-server: The principle behind the client-server constraint is the separation of concerns. For example, clients should not be concerned with the storage of data, which is a concern of the server. Similarly, the server should not be concerned with the user interface, which is a concern of the client. This separation allows client and server to be independently developed and updated.

Stateless: Each request from client to server must contain all the information necessary to understand the request, and cannot take advantage of any stored context on the server. The session state is kept entirely on the client.

Cache-able – Cache constraints requires that the data within a response to a request be implicitly or explicitly levelled as cache-able or non-cache-able. If a response is cache-able, then a client cache is given the right to reuse that response data for later, equivalent requests. Caching can partially or completely eliminate some instructions and improve efficiency and scalability.

 

Layered system – layered system constraints, constrains the behaviour of components such that each component cannot see beyond the immediate layer with they are interacting. For example, the client cannot tell whether it is connected directly to the end server or two an intermediary along the way. System scalability can be improved by allowing intermediaries to respond to requests instead of the end server, without the client having to do anything different.

 

Uniform interface – The uniform interface constraint requires that the method of communication between client and server be uniform. Resources are identified in the requests (by URIs in web-based systems) and are themselves separate from the representations of the resource data returned to the client. When a client holds a representation of a resource, it has all the information required to update or delete the resource (provided the client has the required permissions). Each message includes enough information to describe how to process it.

 

Code on demand – Servers can provide executable code or scripts for clients to execute in their own context. This is the only constraint that is optional.
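The constraints above can be illustrated with a minimal, framework-free sketch of a REST-style IoT sensor service. The names (`SensorStore`, `handle_request`) and the in-memory store are invented for illustration; a real service would run such handlers behind an HTTP server.

```python
# Minimal sketch of a REST-style resource handler for an IoT sensor
# service. Names (SensorStore, handle_request) are illustrative.
# Each request carries everything the server needs (stateless), and
# resources are identified by URI-like paths (uniform interface).

import json

class SensorStore:
    """In-memory resource store keyed by URI path."""
    def __init__(self):
        self.resources = {}

    def handle_request(self, method, path, body=None):
        """Dispatch a request the way a REST API would: the method
        plus the path fully describe the action to perform."""
        if method == "GET":
            if path in self.resources:
                return 200, json.dumps(self.resources[path])
            return 404, None
        if method == "PUT":
            self.resources[path] = body          # create or replace
            return 201, None
        if method == "DELETE":
            self.resources.pop(path, None)
            return 204, None
        return 405, None                         # method not allowed

store = SensorStore()
store.handle_request("PUT", "/sensors/temp1", {"value": 24.5, "unit": "C"})
status, payload = store.handle_request("GET", "/sensors/temp1")
```

Note how the server keeps no session state: every call to `handle_request` is self-describing, which is exactly the stateless constraint.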

 

Explain cloud computing as an IoT enabling technology.

 

Cloud computing is a transformative computing paradigm that involves delivering applications and services over the Internet. It involves provisioning computing, networking and storage resources on demand, and providing these resources as metered services to the users in a "pay as you go" model. Cloud computing resources can be provisioned on demand by the users without requiring interaction with the cloud service provider; the process of provisioning resources is automated. These resources can be accessed over the network using standard access mechanisms that provide platform-independent access through heterogeneous client platforms such as workstations, laptops, tablets and smartphones.

 

Cloud computing services are offered to users in different forms:

 

Infrastructure as a Service (IaaS): provides users the ability to provision computing and storage resources. These resources are provided to the users as virtual machine instances and virtual storage.

Platform as a Service (PaaS): provides users the ability to develop and deploy applications in the cloud using development tools, application programming interfaces (APIs) and software libraries. The users themselves are responsible for developing, deploying, configuring and managing applications on the cloud infrastructure.

Software as a Service (SaaS): provides users a complete software application or the user interface to the application itself. SaaS applications are platform independent and can be accessed from various client devices such as laptops, tablets and smartphones running different operating systems, i.e. users can access the application from anywhere.
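The "pay as you go" model described above can be sketched with a toy metering calculation. The resource names and rates below are invented for illustration; real providers meter many more dimensions (network egress, API calls, and so on).

```python
# Toy illustration of pay-as-you-go metering: the bill is simply the
# sum of metered usage multiplied by a per-unit rate. Rates and
# resource names are hypothetical, for illustration only.

RATES = {
    "vm_hours": 0.05,          # hypothetical price per VM-hour (USD)
    "storage_gb_month": 0.02,  # hypothetical price per GB-month (USD)
}

def metered_bill(usage):
    """Sum usage * rate for each metered resource, rounded to cents."""
    return round(sum(qty * RATES[resource] for resource, qty in usage.items()), 2)

bill = metered_bill({"vm_hours": 100, "storage_gb_month": 50})
# 100 * 0.05 + 50 * 0.02 = 6.0
```

The key point is that the user pays only for measured consumption, rather than for pre-purchased capacity.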

 

MATRUSRI ENGINEERING COLLEGE

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

SlipTest - I

Subject: Fundamental of IoT (OE 702EC)                               Duration: 40 Min

Class: BE VII SEM (CSE A& B)                                              Max Marks: 10

Academic year: 2019-2020                 Name of the Faculty:  Mr.V.Karunakar Reddy

Answer All Questions  Part –A (Short answer questions)                                         (2x1=2M)

1. Why do IoT systems have to be self-adapting and self-configuring? [CO1][L2]

Ans: - Self-adapting: IoT devices and systems may have the capability to adapt dynamically to changing contexts and take actions based on their operating conditions, the user's context, or the sensed environment.

Self-configuring: IoT devices may have self-configuring capability, allowing a large number of devices to work together to provide certain functionality (such as weather monitoring). These devices have the ability to configure themselves in association with the IoT infrastructure, set up the networking, and fetch the latest software upgrades with minimal manual or user intervention.

 

2. Examine the main parts of an IoT system.                                         [CO1][L4]

Ans: - An IoT system consists of three main parts:

                Sensors

                Network connectivity

                Data storage applications

 

Part –B (Long Answer questions)                               (2x4=8M)

3. Describe an example of IoT Service that uses WebSocket-based communication?  [CO1][L2]  

Ans:- WebSocket APIs allow bi-directional, full-duplex communication between clients and servers. They do not require a new connection to be set up for each message to be sent. Communication begins with a connection setup request sent by the client to the server. This request is sent over HTTP, and the server interprets it as an upgrade request.

 

After the connection is established, the client and server can send data/messages to each other in full-duplex mode. WebSocket APIs reduce the network traffic and latency, since connection setup and termination requests are not needed for each message.

                                                                                                                         

4. Determine the IoT level for a structural health monitoring (SHM) system. [CO1][L5]

Ans:- Structural health monitoring systems use a network of sensors to monitor the vibration levels in structures such as bridges and buildings. The data collected from these sensors is analysed to assess the health of the structures: to detect cracks and mechanical breakdowns, locate damage to a structure, and calculate the remaining life of the structure.

 

Let us consider an example of an IoT Level-6 system for SHM. The system consists of multiple end nodes placed in different locations for monitoring damage and breakdowns. The end nodes send the data to the cloud in real time using a WebSocket service, and the data is stored in cloud storage. SHM systems use a large number of wireless sensor nodes which are powered by traditional batteries.
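As a sketch of the node-side logic in this SHM example, the snippet below flags vibration readings that exceed a damage threshold before the node pushes them to the cloud. The threshold value, sample data and function name are invented for illustration.

```python
# Toy sketch of SHM end-node logic: flag vibration readings that
# exceed a damage threshold. Threshold and samples are hypothetical.

DAMAGE_THRESHOLD = 5.0  # illustrative vibration limit (e.g. mm/s RMS)

def flag_anomalies(readings, threshold=DAMAGE_THRESHOLD):
    """Return the (timestamp, value) pairs exceeding the threshold."""
    return [(t, v) for t, v in readings if v > threshold]

# Hypothetical samples: (timestamp, vibration level)
samples = [(0, 1.2), (1, 6.7), (2, 0.9), (3, 5.1)]
alerts = flag_anomalies(samples)  # [(1, 6.7), (3, 5.1)]
```

In a real Level-6 deployment the flagged readings, along with the raw stream, would be sent to the cloud over the WebSocket service described above for storage and further analysis.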

 

MATRUSRI ENGINEERING COLLEGE

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Quiz - I

Subject: Fundamental of IoT (OE 702EC)                                                Duration: 10 Min

Class:  BE VII SEM (CSE A& B)                                                      Max Marks: 5M

Academic year: 2019-2020                                              Name of the Faculty: Mr.V.Karunakar Reddy

                                    

                                                                MULTIPLE CHOICE QUESTIONS                        (10X0.5) = 5M

1. The main functions of IoT?                                                                                     [d]

a) Forwarding packets between LAN and WAN on the IP layer b) Performing application-layer functions between IoT nodes and other entities c) Enabling local, short-range communication between IoT devices d) All the above

2. M2M is a term introduced by?                                                                                 [c]

a) IoT service providers b) Fog computing service providers c) Telecommunication service providers

d) None of these

3. The IPv4 addressing capacity                                                                                              [c]

a) 2^6 b) 2^16 c) 2^32 d) 2^128

4. The IPv6 addressing capacity                                                                                              [d]

a) 2^6 b) 2^16 c) 2^32 d) 2^128

 

5. Which of the following is true                                                                                 [b]

a) IoT is Subset of M2M b) M2M is Subset of IoT c) IoT is Subset of CPS (cyber physical system)        d) CPS is Subset of WOT (web of things)

   

6. Which of these statements regarding sensors is TRUE?                                                    [d]

a) Sensors are input devices b) Sensors can be analog as well as digital c) Sensors respond to some external stimuli d) All of these.

 

7. Which statement is NOT TRUE?                                                                              [d]

a) IoT WAN connects various network segments b) IoT WAN is geographically wide c) IoT WAN is organizationally wide d) None of these

 

8. A mechanical actuator converts?                                                                             [c]

a) Rotary motion into electrical power b) Electrical power into rotary motion c) Rotary motion into linear motion d) Linear motion into rotary motion

 

9. Temperature, Speed, Pressure, Displacement and Strain are                                                [c]

a) Analog quantities b) Digital quantities c) Sometimes analog and sometimes digital d) None of these

 

10. Based on the data type, sensors can be classified in which of the two categories? [a]

a) Analog and Digital   b) Isomorphic and Homomorphic c) Scalar and Vector d) Solid and Liquid

 

Video Lectures

https://www.youtube.com/watch?v=AvMyEOpOzqw