Thursday, October 31, 2019

Why did the suffragette movement in London turn violent in 1908 Essay

Why did the suffragette movement in London turn violent in 1908 - Essay Example

The Daily Mail in London, on 10th January 1906, used the word "suffragette" to identify those women who adopted direct action and violence in their campaign for the right to vote. "Suffragists" was the term used for women who campaigned through peaceful, conventional methods. Although women had been fighting for the right to vote since the 1860s, the movement gained momentum under the leadership of Emmeline Pankhurst and her two daughters, Christabel and Sylvia. The Pankhursts established the Women's Social and Political Union (WSPU) in 1903. According to June Purvis (2003), Emmeline was moved by the plight of women in poverty. She believed that the only way women could gain their rights in society was through the right to vote. As Purvis (2003) writes, Emmeline believed that "women... had to form their own independent movement if the vote was to be won and to find new ways of breathing life into the women's suffrage campaign." Pankhurst disagreed with the methods of the National Union of Women's Suffrage Societies (NUWSS), led by Millicent Garrett Fawcett. The NUWSS had adopted peaceful, "ladylike" conventional methods of campaigning and also recruited men for various positions. According to Purvis (2003), "Emmeline was convinced that a fresh approach was needed and that women had to do the work themselves." This led to the formation of the WSPU, and it was made clear that the WSPU would be different from the NUWSS: membership was limited to women only, and the organisation pledged to be satisfied with nothing but action. As Purvis (2003) writes, "Deeds, not words" was adopted as the main motto of the WSPU.

Tuesday, October 29, 2019

Families of color creating harmony and optical illusions Essay Example for Free

Families of color creating harmony and optical illusions Essay

Modern television production, music videos and movies rely on the influential power of color to capture and hold an audience. The glowing, spellbinding colors we perceive in bright sunlight are used deliberately, building on the study of basic art concepts. In the 1600s, Newton devised the famous color wheel, which provides the standard guideline for combining colors into a pleasing multicolored visual, beginning with three basic colors. The color wheel breaks color down by category, forming families of color. As long as colors on the wheel form grey shades when mixed together, they are considered to belong to the same family; this is what is meant by a family of colors. The categories of colors are identified as primary, secondary or tertiary, complementary, split complementary, analogous, and triad. The primary colors on the color wheel are only three: red, blue and yellow. From these three basic colors, all other color combinations are created. Secondary colors are mixtures of primary colors; for example, mixing the two primaries red and blue makes the secondary purple. Hue describes the way a color is seen; two-toned opponent pairs, most commonly red-green and blue-yellow, are frequently used. Complementary colors sit directly opposite each other on the color wheel. Analogous colors are any three colors that lie next to each other on the wheel. Triad colors are three colors spaced equally apart. Once the artist understands thoroughly how to coordinate colors using the color wheel, optical illusions and harmony can be formed.

Color harmony is a combination of colors complementing each other to create a pleasant visual image, or a complete picture. To understand how the color wheel may be used to create harmony, the wheel is broken down even further than the basic colors in a chart called a hue histogram. "Harmonic colors hold a specific relationship by their position within a color space" (King, 2002). The harmonic schemes include monochromatic (a small slice of adjacent hues on the wheel), complementary (a two-color scheme from opposite sides), split complementary, split, and the four-tone chord (King 2002). The hue histogram is a diagram showing which colors belong to the same family and which colors contrast. After the color specialists decide on a scheme, the chosen colors are mixed, determining to what degree the cells are tinted. The shade and color of a colored cell is called its pixel value. This is the most time-consuming part of the image-making process, but also the part that contributes most to the visual appeal. The hue histogram makes the process easier and more thorough; it uses letter-shaped angle templates: i, I, V, l, T, Y, X, N (http://www.websiteoptimization.com/speed/tweak/color-harmony/). This histogram is used to create harmony and to create optical-illusion images.

Color harmony is used to create a picture; optical illusion uses color to make the picture appear as a moving image. In Victor Vasarely's optical-illusion image Orion C, he used shapes and contours together with color. The colors may have belonged to the same family, but many of them were not adjacent hues: in the center, he placed light blue next to light pink, using a wide range of colors far apart on the histogram yet all belonging to the same family. Normally, black and white are not considered colors, only shades; he used plenty of white to give the illusion of squares moving into each other.
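
To make the complement relationship on the color wheel described above concrete, here is a minimal sketch (not part of the original essay) using Python's standard colorsys module. It assumes the common simplification that a complement sits 180 degrees away in hue on the RGB/HSL wheel; painters' wheels based on red-yellow-blue place complements slightly differently, and the RGB value used is an arbitrary example, not taken from the artworks discussed.

import colorsys

def complement(r, g, b):
    """Return the approximate complement of an RGB color (0-1 floats)
    by rotating its hue half way around the color wheel."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    h = (h + 0.5) % 1.0          # 180-degree hue rotation
    return colorsys.hls_to_rgb(h, l, s)

# Example: a light blue and its (roughly orange) complement
light_blue = (0.6, 0.8, 1.0)
print(complement(*light_blue))
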
In Orion C, the viewer can look at one square and, before they know it, find themselves looking at another (http://www.artinthepicture.com/artists/Victor_Vasarely/). In Bearden's Moods, Music and Life image, the artist used color harmony. The ranges selected from the color wheel were colors very close together or next to each other, that is, adjacent hues. Not all of the colors used were adjacent, but they did not range more than three shades apart, and colors were selected to distinguish the objects from one another (http://www.nga.gov/education/classroom/bearden/musli1.shtm). Hue histograms are used by color technicians to provide lifelike and mood-enhancing images, videos and movies. When using the hue histogram, it is important to realize that the letter-shaped angles can move: the V on the histogram can shift to cover different shades, but the angle cannot be widened to include more colors. If the artist is to create harmony, they must follow these rules. Sometimes contrast is desired instead of harmony; for the image to remain visually pleasing, the color rules still apply, and contrast would use the T shape. Even in complex images, everything starts with three basic colors and the wheel that, as noted above, Newton devised in the 1600s as the standard guideline for combining colors.

References
The Art of Romare Bearden, A Resource for Teachers; http://www.nga.gov/education/classroom/bearden/musli1.shtm
Art in the Picture; Victor Vasarely; http://www.artinthepicture.com/artists/Victor_Vasarely/
King, 2002, Automated Color Harmony Tools; http://www.websiteoptimization.com/speed/tweak/color-harmony/

Sunday, October 27, 2019

Technology for Network Security

Technology for Network Security

2.0 CHAPTER TWO

2.1 INTRODUCTION
The ever-increasing reliance on information technology brought about by globalisation has created the need for better network security. The rate at which computer networks are expanding to accommodate higher bandwidth, unique storage demands and growing numbers of users cannot be overemphasised. As this demand grows daily, so do the threats associated with it, among them virus attacks, worm attacks, and denial-of-service or distributed denial-of-service attacks. This calls for swift security measures to address these threats in order to protect data reliability, integrity, availability and other needed network resources.

Generally, network security can be described as protecting the integrity of a network by ensuring that unauthorised access and threats of any form are prevented from reaching valuable information. As network architectures expand, securing them becomes more and more complex, keeping network administrators on their toes to guard against the attacks that occur daily. Typical malicious attacks include virus and worm attacks, denial-of-service attacks, IP spoofing, password cracking, and Domain Name Server (DNS) poisoning. To combat these threats, many security elements have been designed, including firewalls, Virtual Private Networks (VPN), encryption and decryption, cryptography, Internet Protocol Security (IPSec), Triple Data Encryption Standard (3DES), the Demilitarised Zone (DMZ), and Secure Sockets Layer (SSL). This chapter starts by briefly discussing the Internet Protocol (IP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP) and Internet Control Message Protocol (ICMP); it then discusses the Open Systems Interconnection (OSI) model and the protocols that operate at each of its layers, network security elements, the background of firewalls, types and features of firewalls and, lastly, network security tools.

2.2 A BRIEF DESCRIPTION OF TCP, IP, UDP AND ICMP

2.2.1 DEFINITION
Following the tremendous success of the World Wide Web (Internet), a global communication standard aimed at interconnecting heterogeneous networks, known as the TCP/IP protocol suite, was designed (Dunkels 2003; Global Knowledge 2007; Parziale et al 2006). The TCP/IP protocol suite is the core set of rules used for application transfers such as file transfers, e-mail traffic and web pages between hosts across heterogeneous networks (Dunkels 2003; Parziale et al 2006). It is therefore necessary for a network administrator to have a good understanding of TCP/IP when configuring firewalls, as most policies are set to protect the internal network from attacks that use the TCP/IP protocols for communication (Noonan and Dobrawsky 2006). Many network attacks result from improper configuration and poor implementation of TCP/IP protocols, services and applications. TCP/IP uses protocols such as TCP, UDP, IP and ICMP to define how communication over the network takes place (Noonan and Dobrawsky 2006). Before these protocols are discussed, this thesis briefly looks into the theoretical Open Systems Interconnection (OSI) model (Simoneau 2006).
2.2.2 THE OSI MODEL
The OSI model is a standardised layered model defined by the International Organization for Standardization (ISO) for network communication. It divides network communication into seven separate layers, each with its own unique functions, supporting the layer immediately above it while offering services to the layer immediately below it (Parziale et al 2006; Simoneau 2006). The seven layers are the Application, Presentation, Session, Transport, Network, Data Link and Physical layers. The three lower layers (Network, Data Link and Physical) are basically hardware implementations, while the four upper layers (Application, Presentation, Session and Transport) are software implementations.

Application Layer: the end-user operating interface that supports file transfer, web browsing, electronic mail, etc., and allows user interaction with the system.
Presentation Layer: responsible for formatting the data to be sent across the network so that the receiving application can understand the message; it also handles message encryption and decryption for security purposes.
Session Layer: responsible for dialog and session control functions between systems.
Transport Layer: provides end-to-end communication, which may be reliable or unreliable, between end devices across the network. The two most commonly used protocols in this layer are TCP and UDP.
Network Layer: also known as the logical layer; responsible for logical addressing and packet delivery services. The protocol used in this layer is IP.
Data Link Layer: responsible for framing units of information, error checking and physical addressing.
Physical Layer: defines transmission medium requirements and connectors and is responsible for the transmission of bits on the physical hardware (Parziale et al 2006; Simoneau 2006).

2.2.3 INTERNET PROTOCOL (IP)
IP is a connectionless protocol designed to deliver data between hosts across the network. IP data delivery is unreliable, and it therefore depends on upper-layer protocols such as TCP, or lower-layer protocols like IEEE 802.2 and IEEE 802.3, for reliable data delivery between hosts on the network (Noonan and Dobrawsky 2006).

2.2.4 TRANSMISSION CONTROL PROTOCOL (TCP)
TCP is a standard connection-oriented transport mechanism that operates at the transport layer of the OSI model and is described by Request for Comments (RFC) 793. TCP solves the unreliability problem of the network-layer protocol (IP) by making sure packets are transmitted reliably and accurately, errors are recovered, and flow control between hosts across the network is efficiently monitored (Abie 2000; Noonan and Dobrawsky 2006; Simoneau 2006). The primary objective of TCP is to create a session between hosts on the network, and this is carried out by what is called the TCP three-way handshake. When using TCP for data transmission between hosts, the sending host first sends a synchronise (SYN) segment to the receiving host, which is the first step in the handshake. The receiving host, on receiving the SYN segment, replies with an acknowledgement (ACK) together with its own SYN segment; this forms the second part of the handshake. The final step of the handshake is then completed by the sending host responding with its own ACK segment to acknowledge the acceptance of the SYN/ACK.
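
As a minimal illustration of the handshake just described (not part of the original thesis), the Python sketch below opens a TCP connection with the standard socket module. The operating system's TCP stack performs the SYN, SYN/ACK and ACK exchange when the connection is created; the host name and port are arbitrary examples.

import socket

# Creating a TCP connection triggers the three-way handshake
# (SYN -> SYN/ACK -> ACK) inside the OS network stack.
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    # At this point the handshake has completed and a virtual
    # circuit (the TCP connection) exists between the two hosts.
    sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = sock.recv(1024)
    print(reply.decode(errors="replace"))
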
Once this process is completed, the hosts have established a virtual circuit between themselves through which the data will be transferred (Noonan and Dobrawsky 2006). As useful as the TCP three-way handshake is, it also has its shortcomings, the most common being the SYN flood attack. This form of attack occurs when the destination host, such as a server, is flooded with SYN session requests without ever receiving an ACK reply from the source host (the malicious host) that initiated the SYN sessions. The result is a denial-of-service attack: the destination host's buffer reaches a point where it can no longer take requests from legitimate hosts and has no choice but to drop such session requests (Noonan and Dobrawsky 2006).

2.2.5 USER DATAGRAM PROTOCOL (UDP)
UDP, unlike TCP, is a standard connectionless transport mechanism that operates at the transport layer of the OSI model. It is described by Request for Comments (RFC) 768 (Noonan and Dobrawsky 2006; Simoneau 2006). When using UDP to transfer packets between hosts, session initiation, retransmission of lost or damaged packets and acknowledgements are omitted, so 100 percent packet delivery is not guaranteed (Sundararajan et al 2006; Postel 1980). UDP is designed with low overhead, as it does not involve setting up a session between hosts before data transmission starts, and it is best suited to small data transmissions (Noonan and Dobrawsky 2006). A short socket sketch illustrating this connectionless exchange is shown a little further below.

2.2.6 INTERNET CONTROL MESSAGE PROTOCOL (ICMP)
ICMP is primarily designed to identify and report routing errors, delivery failures and delays on the network. The protocol can only report errors; it cannot correct the errors it identifies, and it depends on routing protocols or reliable protocols like TCP to handle the errors detected (Noonan and Dobrawsky 2006; Dunkels 2003). ICMP uses the echo mechanism behind the ping command, which is used to check whether a host is responding to network traffic (Noonan and Dobrawsky 2006; Dunkels 2003).

2.3 OTHER NETWORK SECURITY ELEMENTS

2.3.1 VIRTUAL PRIVATE NETWORK (VPN)
A VPN is a network security element that makes use of the public network infrastructure to maintain the confidentiality of information transferred between hosts over the public network (Bou 2007). A VPN provides this security by using encryption and tunneling techniques to protect such information, and it can be configured to support at least three models: remote-access connections, site-to-site connections (branch offices to the headquarters), and local area network internetworking (extranet connections between companies and their business partners) (Bou 2007).

2.3.2 VPN TECHNOLOGY
VPNs use a number of standard protocols to implement data authentication (identification of trusted parties) and encryption (scrambling of data) when the public network is used to transfer data. These protocols include:
- Point-to-Point Tunneling Protocol (PPTP) [RFC 2637]
- Secure Sockets Layer (SSL) [RFC 2246]
- Internet Protocol Security (IPSec) [RFC 2401]
- Layer 2 Tunneling Protocol (L2TP) [RFC 2661]

2.3.2.1 POINT-TO-POINT TUNNELING PROTOCOL (PPTP)
PPTP is designed to provide a secure means of transferring data over the public infrastructure, with authentication and encryption support between hosts on the network. The protocol operates at the data link layer of the OSI model and basically relies on user identification (ID) and password authentication for its security.
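
Referring back to UDP as described in section 2.2.5 above, the following minimal sketch (not part of the original thesis) shows the connectionless behaviour with Python's socket module: a datagram is handed to the network without any handshake, and nothing guarantees that it arrives. The address and port are arbitrary test examples, and the PPTP discussion continues after the sketch.

import socket

# No connection setup: a UDP datagram is simply handed to the network.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2)
sock.sendto(b"hello", ("192.0.2.10", 9999))   # documentation/test address

try:
    data, addr = sock.recvfrom(1024)          # a reply may never come
    print("reply from", addr, data)
except socket.timeout:
    print("no reply - UDP gives no delivery guarantee")
finally:
    sock.close()
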
PPTP did not eliminate the Point-to-Point Protocol (PPP); rather, it describes a better way of tunneling PPP traffic by using Generic Routing Encapsulation (GRE) (Bou 2007; Microsoft 1999; Schneier and Mudge 1998).

2.3.2.2 LAYER 2 TUNNELING PROTOCOL (L2TP)
L2TP is a connection-oriented protocol standard defined by RFC 2661, which merged the best features of PPTP and the Layer 2 Forwarding (L2F) protocol to create the new standard (Bou 2007; Townsley et al 1999). Just like PPTP, L2TP operates at layer 2 of the OSI model. Tunneling in L2TP is achieved through a series of data encapsulations by protocols at different layers, for example UDP, IPSec, IP and data link layer protocols, but the data encryption for the tunnel is provided by IPSec (Bou 2007; Townsley et al 1999).

2.3.2.3 INTERNET PROTOCOL SECURITY (IPSEC) [RFC 2401]
IPSec is a standard protocol defined by RFC 2401 which is designed to protect the payload of an IP packet and the paths between hosts, between security gateways (routers and firewalls), or between a security gateway and a host, over an unprotected network (Bou 2007; Kent and Atkinson 1998). IPSec operates at the network layer of the OSI model. Some of the security services it provides are authentication, connectionless integrity, encryption, access control, data origin authentication and rejection of replayed packets (Kent and Atkinson 1998).

2.3.2.4 SECURE SOCKETS LAYER (SSL) [RFC 2246]
SSL is a standard protocol defined by RFC 2246 which is designed to provide a secure communication tunnel between hosts by encrypting their communication over the network, ensuring packet confidentiality and integrity and proper host authentication, in order to eliminate eavesdropping attacks on the network (Homin et al 2007; Oppliger et al 2008). SSL makes use of security elements such as digital certificates and cryptography to enforce security over the network. SSL is a transport layer security protocol that runs on top of TCP/IP, which manages the transport and routing of packets across the network; it is also deployed at the application layer of the OSI model to ensure host authentication (Homin et al 2007; Oppliger et al 2008; Dierks and Allen 1999).

2.4 FIREWALL BACKGROUND
The concept of a network firewall is to prevent unauthorised packets from gaining entry into a network by filtering all packets that come into that network. The word firewall was not originally computer security vocabulary; it was first used for a wall, of brick or mortar, built to restrain fire from spreading from one part of a building to another, or at least to slow its spread and give time for remedial action to be taken (Komar et al 2003).

2.4.1 BRIEF HISTORY OF FIREWALLS
Firewalls as used in computing date as far back as the late 1980s, but the first firewalls came to light around 1985 in the form of the packet filter firewall produced by Cisco's Internetwork Operating System (IOS) division (Cisco System 2004). In 1988, Jeff Mogul of DEC (Digital Equipment Corporation) published the first paper on firewalls. Between 1989 and 1990, two workers at AT&T Bell Laboratories, Howard Trickey and Dave Presotto, initiated the second generation of firewall technology with their study of circuit relays, called the circuit-level firewall. The two scientists also implemented the first working model of the third-generation firewall design, the application layer firewall.
Sadly, no documents explaining their work were published and no product was released to support it. Around the same time (1990-1991), various papers on third-generation firewalls were published by researchers, but among them Marcus Ranum's work received the most attention in 1991, taking the form of bastion hosts running proxy services. Ranum's work quickly evolved into the first commercial product, Digital Equipment Corporation's SEAL (Cisco System 2004). Around the same time, work started on the fourth-generation firewall, called dynamic packet filtering, which did not become operational until 1994, when Check Point Software rolled out a complete working model of the fourth-generation firewall architecture. In 1996, plans began for the fifth-generation firewall design, the Kernel Proxy architecture, which became reality in 1997 when Cisco released the Cisco Centri Firewall, the first proxy firewall produced for commercial use (Cisco System 2004). Since then, many vendors have designed and implemented various forms of firewall in both hardware and software, and to date research is ongoing into improving firewall architectures to meet the ever-increasing challenges of network security.

2.5 DEFINITION
According to the British Computer Society (2008), firewalls are defence mechanisms that can be implemented in either hardware or software and serve to prevent unauthorized access to computers and networks. Similarly, Subrata et al (2006) define a firewall as a combination of hardware and software used to implement a security policy governing the flow of network traffic between two or more networks. The firewall in computer systems security is similar in concept to the firewall built within a building, but they differ in function: while the latter is designed for the single task of fire prevention, a computer system firewall is designed to prevent more than one kind of threat (Komar et al 2003), including denial-of-service (DoS) attacks, virus attacks, worm attacks and hacking attacks.

2.5.1 DENIAL OF SERVICE (DOS) ATTACKS
"Countering DoS attacks on web servers has become a very challenging problem" (Srivatsa et al 2006). This is an attack aimed at denying legitimate packets access to network resources. The attacker achieves this by running a program that floods the network, making network resources such as main memory, network bandwidth and hard disk space unavailable to legitimate packets. The SYN attack is a good example of a DoS attack, but it can be prevented by implementing good firewall policies for the secured network; a detailed firewall policy (iptables) is presented in chapter three of this thesis.

2.5.2 VIRUS AND WORM ATTACKS
Virus and worm attacks are a serious security problem that can become pandemic in the twinkling of an eye, resulting in huge loss of information or system damage (Ford et al 2005; Cisco System 2004). These attacks can be programs designed to open systems up to information theft, or programs that replicate themselves once they get into a system until they crash it, and some can be programmed to generate traffic that floods the network, leading to DoS attacks. Therefore, security tools that can proactively detect possible attacks are required to secure the network; one such tool is a firewall with a well-configured security policy (Cisco System 2004). Generally speaking, any kind of firewall implementation will basically perform the following tasks:
- manage and control network traffic
- authenticate access
- act as an intermediary
- make internal resources available
- record and report events

2.5.3 MANAGE AND CONTROL NETWORK TRAFFIC
The first task a firewall undertakes in securing a computer network is checking all the traffic entering and leaving the network. This is achieved by stopping packets and analysing their source IP address, source port, destination IP address, destination port, IP protocol and other header information in order to decide whether to accept or reject them. This action is called packet filtering, and its outcome depends on the firewall configuration (a small rule-matching sketch illustrating this decision appears a little further below). The firewall can also use the connections between TCP/IP hosts, established for identification and to define how the hosts will communicate with each other, to decide which connections should be permitted or discarded. This is achieved by maintaining a state table used to track the state of all the packets passing through the firewall, and is called stateful inspection (Noonan and Dobrawsky 2006).

2.5.4 AUTHENTICATE ACCESS
When a firewall inspects and analyses a packet's source IP address, source port, destination IP address, destination port, IP protocol and other header information, and filters it according to the defined security procedure, this still does not guarantee that the communication between the source and destination hosts is authorised, because attackers can spoof IP addresses and ports, which defeats inspection based on IP and port screening alone. To tackle this pitfall, authentication rules are implemented in firewalls using a number of mechanisms, such as usernames and passwords (xauth), certificates and public keys, and pre-shared keys (PSKs). With the xauth method, the firewall asks the source host that is trying to initiate a connection with a host on the protected network for its username and password before it allows a connection between the protected network and the source host to be established. Once the connection has been confirmed and authorised by the defined security procedure, the source host does not need to authenticate itself again to make a connection (Noonan and Dobrawsky 2006). The second method uses certificates and public keys. Its advantage over xauth is that verification can take place without the source host having to supply a username and password. Implementing certificates and public keys requires proper configuration of the hosts (on the protected network and the source host) with certificates, proper configuration of the firewall, and a properly configured public key infrastructure shared by the protected network and the source host. This method is best suited to large network designs (Noonan and Dobrawsky 2006). Another good way of dealing with authentication in firewalls is to use pre-shared keys (PSKs). PSKs are easy to implement compared to certificates and public keys; authentication still occurs without the source host's intervention, but an additional feature is used, namely providing the host with a predetermined key that is used for the verification procedure (Noonan and Dobrawsky 2006).

2.5.5 ACT AS AN INTERMEDIARY
When firewalls are configured to serve as an intermediary between a protected host and an external host, they simply function as an application proxy.
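
To make the packet-filtering decision described in section 2.5.3 concrete, here is a minimal, self-contained sketch (not part of the original thesis, and far simpler than a real firewall such as iptables). The rule fields and addresses are invented examples, and the intermediary discussion continues after the sketch.

from ipaddress import ip_address, ip_network

# Each rule: (source network, destination port, action)
RULES = [
    (ip_network("10.0.0.0/8"), 22, "ACCEPT"),    # internal hosts may use SSH
    (ip_network("0.0.0.0/0"), 80, "ACCEPT"),     # anyone may reach the web server
]
DEFAULT_ACTION = "REJECT"                        # default-deny policy

def filter_packet(src_ip, dst_port):
    """Return the action for a packet, based on the first matching rule."""
    for network, port, action in RULES:
        if ip_address(src_ip) in network and dst_port == port:
            return action
    return DEFAULT_ACTION

print(filter_packet("10.1.2.3", 22))     # ACCEPT
print(filter_packet("203.0.113.9", 23))  # REJECT (telnet not allowed)
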
The firewalls in this setup are configured to impersonate the protected host, so that all packets destined for the protected host from the external host are delivered to the firewall, which appears to the external host to be the protected host. Once the firewall receives the packets, it inspects them to determine whether each packet is valid (for example, a genuine HTTP packet) before forwarding it to the protected host. This firewall design completely blocks direct communication between the hosts.

2.5.6 RECORD AND REPORT EVENTS
While it is good practice to put strong security policies in place to secure a network, it is equally important to record firewall events. Using firewalls to record and report events is a technique that helps investigate what kind of attack took place in situations where the firewall was unable to stop malicious packets that violated the access control policy of the protected network. Recording these events gives the network administrator a clear understanding of the attack and, at the same time, provides material for troubleshooting the problem that has taken place. To record these events, network administrators use different methods, but syslog or a proprietary logging format is most commonly used for firewalls (a short logging sketch is shown later in this section). However, some malicious events need to be reported quickly so that immediate action can be taken before serious damage is done to the protected network. Firewalls therefore also need an alarm mechanism, in addition to the syslog or proprietary log, for whenever the access control policy of the protected network is violated. The types of alarm supported by firewalls include console notification, Simple Network Management Protocol (SNMP) notification, paging notification and e-mail notification (Noonan and Dobrawsky 2006). Console notification is a warning message presented on the firewall console; the problem with this method is that the console must be monitored by the network administrator at all times so that the necessary action can be taken when an alarm is generated. SNMP notification is implemented by creating traps which are transferred to the network management system (NMS) monitoring the firewall. Paging notification is set up on the firewall to deliver a page to the network administrator whenever the firewall encounters an event; the message may be alphanumeric or numeric, depending on how the firewall is set up. E-mail notification is similar to paging notification, but in this case the firewall sends an e-mail to the appropriate address.

2.6 TYPES OF FIREWALLS
Going by the firewall definition above, firewalls are expected to perform some key functions, such as application proxying, network address translation and packet filtering.

2.6.1 APPLICATION PROXY
This is also known as an application gateway, and it acts as a connection agent between the protected network and the external network. Basically, the application proxy is a host on the protected network that is set up as a proxy server. Just as the name implies, an application proxy functions at the application layer of the Open Systems Interconnection (OSI) model and makes sure that all application requests from the secured network are communicated to the external network through the proxy server, and that no packets pass from the external network to the secured network until the proxy has checked and confirmed the inbound packets.
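
Relating back to the record-and-report function described in section 2.5.6, the sketch below (not part of the original thesis) shows one way a host-based filter could log a policy violation to the local syslog service using Python's standard logging module. The socket path /dev/log and the message fields are assumptions that vary between systems; the application-proxy discussion continues after the sketch.

import logging
import logging.handlers

# Send firewall events to the local syslog daemon (socket path varies by OS).
logger = logging.getLogger("demo-firewall")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address="/dev/log"))

def report_violation(src_ip, dst_port, action):
    # An alarm mechanism (e-mail, SNMP trap, pager) could be triggered here
    # for events that need immediate attention.
    logger.warning("policy violation: src=%s dst_port=%s action=%s",
                   src_ip, dst_port, action)

report_violation("203.0.113.9", 23, "REJECT")
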
This type of firewall supports different protocols, such as the Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP) and Simple Mail Transfer Protocol (SMTP) (Noonan and Dobrawsky 2006; NetContinuum 2006).

2.6.2 NETWORK ADDRESS TRANSLATION (NAT)
NAT alters the IP addresses in hosts' packets, hiding the genuine IP addresses of the secured network's hosts and dynamically replacing them with different IP addresses (Cisco System 2008; Walberg 2007). When request packets are sent from the secured host through the gateway to an external host, the source address is modified to a different IP address by NAT. When the reply packets arrive at the gateway, NAT replaces the modified address with the genuine host address before forwarding the packets to the host (Walberg 2007). The role played by NAT in a secured network makes it difficult for unauthorised parties to learn the number of hosts in the protected network, the topology of the network, the operating systems the hosts are running, and the type of host machines (Cisco System 2008).

2.6.3 PACKET FILTERING
"Firewalls and IPSec gateways have become major components in the current high speed Internet infrastructure to filter out undesired traffic and protect the integrity and confidentiality of critical traffic" (Hamed and Al-Shaer 2006). Packet filtering is based on the security rules laid down for a given network or system. Filtering traffic over the network is a big task that requires a comprehensive understanding of the network on which it is set up, and the defined policy must always be kept up to date in order to handle possible network attacks (Hamed and Al-Shaer 2006).

2.6.4 INTRUSION DETECTION SYSTEMS
Network penetration attacks are on the increase, with valuable information being stolen or damaged by attackers, and many security products have been developed to combat them. Two such products are Intrusion Prevention Systems (IPS) and Intrusion Detection Systems (IDS). An IDS is software designed to monitor and analyse all activity (network traffic) on the network for any suspicious threat that may violate the defined network security policies (Scarfone and Mell 2007; Vignam et al 2003). An IDS can use a variety of methods to detect threats on the network; two of them are anomaly-based IDS and signature-based IDS.

2.6.4.1 ANOMALY-BASED IDS
An anomaly-based IDS is set up to monitor and compare network events against what is defined as normal network activity, represented by a profile, in order to detect any deviation from the defined normal events. Examples of the events compared are the bandwidth used and the types of protocols seen; once the IDS identifies a deviation in any of these events, it notifies the network administrator, who then takes the necessary action to stop the intended attack (Scarfone and Mell 2007).

2.6.4.2 SIGNATURE-BASED IDS
A signature-based IDS is designed to monitor and compare packets on the network against a signature database of known malicious attacks or threats (a simple matching sketch is shown below). This type of IDS is efficient at identifying already-known threats, but ineffective at identifying new threats that are not yet defined in the signature database, thereby giving way to network attacks (Scarfone and Mell 2007).

2.6.5 INTRUSION PREVENTION SYSTEMS (IPS)
IPS are proactive security products, implemented in software or hardware, used to identify malicious packets and prevent such packets from gaining entry into the network (Ierace et al 2005; Botwicz et al 2006).
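
As a toy illustration of the signature-based detection described in section 2.6.4.2 (not part of the original thesis), the sketch below scans a packet payload for byte patterns held in a small signature database. Real IDS signatures are far richer (protocol fields, offsets, regular expressions); the patterns here are invented examples, and the IPS discussion continues after the sketch.

# A tiny "signature database": name -> byte pattern that should never
# appear in legitimate traffic on this network (invented examples).
SIGNATURES = {
    "example-shellcode": b"\x90\x90\x90\x90",
    "example-worm-probe": b"GET /default.ida?",
}

def match_signatures(payload: bytes):
    """Return the names of all signatures found in a packet payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

packet = b"GET /default.ida?NNNN HTTP/1.0\r\n"
hits = match_signatures(packet)
if hits:
    print("ALERT:", ", ".join(hits))   # an IPS would drop the packet here
else:
    print("no known signature matched")
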
An IPS is another form of firewall, designed to detect irregularities in normal network traffic and to stop possible network attacks such as denial-of-service attacks. IPS are capable of dropping malicious packets and disconnecting any connection suspected to be illegal before such traffic gets to the protected host. Just like a typical firewall, an IPS uses rules defined in the system setup to determine the action to take on any traffic, which may be to allow or to block it. An IPS uses stateful packet analysis to protect the network, and is similarly capable of performing signature matching, application protocol validation and other checks as a means of detecting attacks (Ierace et al 2005). As good as IPS are, they also have their downsides. One of them is the problem of false positives and false negatives. A false positive is a situation where legitimate traffic is identified as malicious, resulting in the IPS blocking that traffic on the network; a false negative, on the other hand, is when malicious traffic is identified by the IPS as legitimate and is therefore allowed to pass through the IPS to the protected network (Ierace et al 2005).

2.7 SOFTWARE AND HARDWARE FIREWALLS

2.7.1 SOFTWARE FIREWALLS
Software-based firewalls are software installed on computers for filtering packets (Permpootanalarp and Rujimethabhas 2001). They are programs set up on the operating system of either personal computers or network servers (web servers and e-mail servers). Once the software is installed and proper security policies are defined, the system (personal computer or server) assumes the role of a firewall. Software firewalls form a second line of defence behind hardware firewalls in situations where both are used for network security. Software firewalls can be installed on different operating systems, such as Windows, Mac OS, Novell NetWare, the Linux kernel and the UNIX kernel, and their function is to filter distorted network traffic. There are several software firewalls, including Online Armor Firewall, McAfee Personal Firewall, ZoneAlarm, Norton Personal Firewall, BlackICE Defender, Sygate Personal Firewall, Panda Firewall and the DoorStop X Firewall (Lugo and Parker 2005). When designing a software firewall, two key things are considered: per-packet filtering and per-process filtering. The per-packet filter is designed to search for distorted packets, detect port scans and check whether packets are accepted into the protocol stack. In the same vein, the per-process filter is designed to check whether a process is allowed to begin a connection to the secured network (Lugo and Parker 2005). It should be noted that there are different implementations of firewalls: some are built into the operating system, while others are add-ons. Examples of built-in firewalls are the Windows-based firewall and the Linux-based firewall.

2.7.2 WINDOWS OPERATING SYSTEM BASED FIREWALL
In operating system design, security is one important aspect that is given great consideration. This is a challenge that the software giant Microsoft has always made sure to address in its products. In the software industry, Mi
It is without a doubt that the rate at which computer networks are expanding in this modern time to accommodate higher bandwidth, unique storage demand, and increase number of users can not be over emphasised. As this demand grows on daily bases, so also, are the threats associated with it. Some of which are, virus attacks, worm attacks, denial of services or distributed denial of service attack etc. Having this in mind then call for swift security measures to address these threats in order to protect data reliability, integrity, availability and other needed network resources across the network. Generally, network security can simply be described as a way of protecting the integrity of a network by making sure authorised access or threats of any form are restricted from accessing valuable information. As network architecture begins to expand, tackling the issue of security is becomes more and more complex to handle, therefore keeping network administrators on their toes to guard against any possible attacks that occurs on daily basis. Some of the malicious attacks are viruses and worm attacks, denial of service attacks, IP spoofing, cracking password, Domain Name Server (DNS) poisoning etc. As an effort to combat these threats, many security elements have been designed to tackle these attacks on the network. Some of which includes, firewall, Virtual Private Network (VPN), Encryption and Decryption, Cryptography, Internet Protocol Security (IPSec), Data Encryption Standard (3DES), Demilitarised Zone, (DMZ), Secure Shell Layer (SSL) etc. This chapter starts by briefly discussi ng Internet Protocol (IP), Transmission Control Protocol (TCP), User datagram Protocol (UDP), Internet Control Message Protocol (ICMP), then discussed the Open system interconnection (OSI) model and the protocols that operate at each layer of the model, network security elements, followed by the background of firewall, types and features of firewalls and lastly, network security tools. 2.2 A BRIEF DESCRIPTION OF TCP, IP, UDP AND ICMP 2.2.1 DEFINITION Going by the tremendous achievement of the World Wide Web (internet), a global communication standard with the aim of building interconnection of networks over heterogeneous network is known as the TCP/IP protocol suite was designed (Dunkels 2003; Global Knowledge 2007; Parziale et al 2006). The TCP/IP protocol suite is the core rule used for applications transfer such as File transfers, E-Mail traffics, web pages transfer between hosts across the heterogeneous networks (Dunkels 2003; Parziale et al 2006). Therefore, it becomes necessary for a network administrator to have a good understanding of TCP/IP when configuring firewalls, as most of the policies are set to protect the internal network from possible attacks that uses the TCP/IP protocols for communication (Noonan and Dobrawsky 2006). Many incidents of network attacks are as a result of improper configuration and poor implementation TCP/IP protocols, services and applications. TCP/IP make use of protocols such as TCP, UDP, IP, ICMP etc to define rules of how communication over the network takes place (Noonan and Dobrawsky 2006). Before these protocols are discussed, this thesis briefly looks into the theoretical Open Systems Interconnection (OSI) model (Simoneau 2006). 
2.2.2 THE OSI MODEL The OSI model is a standardised layered model defined by International Organization for Standardization (ISO) for network communication which simplifies network communication to seven separate layers, with each individual layer having it own unique functions that support immediate layer above it and at same time offering services to its immediate layer below it (Parziale et al 2006; Simoneau 2006). The seven layers are Application, Presentation, Session Transport, Network, Data, Link and Physical layer. The first three lower layers (Network, Data, Link and Physical layer) are basically hardware implementations while the last four upper layers (Application, Presentation, Session and Transport) are software implementations. Application Layer This is the end user operating interface that support file transfer, web browsing, electronic mail etc. This layer allows user interaction with the system. Presentation Layer This layer is responsible for formatting the data to be sent across the network which enables the application to understand the message been sent and in addition it is responsible for message encryption and decryption for security purposes. Session Layer This layer is responsible for dialog and session control functions between systems. Transport layer This layer provides end-to-end communication which could be reliable or unreliable between end devices across the network. The two mostly used protocols in this layer are TCP and UDP. Network Layer This layer is also known as logical layer and is responsible for logical addressing for packet delivery services. The protocol used in this layer is the IP. Data Link Layer This layer is responsible for framing of units of information, error checking and physical addressing. Physical Layer This layer defines transmission medium requirements, connectors and responsible for the transmission of bits on the physical hardware (Parziale et al 2006; Simoneau 2006). 2.2.3 INTERNET PROTOCOL (IP) IP is a connectionless protocol designed to deliver data hosts across the network. IP data delivery is unreliable therefore depend on upper layer protocol such as TCP or lower layer protocols like IEEE 802.2 and IEEE802.3 for reliable data delivery between hosts on the network.(Noonan and Dobrawsky 2006) 2.2.4 TRANSMISSION CONTROL PROTOCOL (TCP) TCP is a standard protocol which is connection-oriented transport mechanism that operates at the transport layer of OSI model. It is described by the Request for Comment (RFC) 793. TCP solves the unreliability problem of the network layer protocol (IP) by making sure packets are reliably and accurately transmitted, errors are recovered and efficiently monitors flow control between hosts across the network. (Abie 2000; Noonan and Dobrawsky 2006; Simoneau 2006). The primary objective of TCP is to create session between hosts on the network and this process is carried out by what is called TCP three-way handshake. When using TCP for data transmission between hosts, the sending host will first of all send a synchronise (SYN) segment to the receiving host which is first step in the handshake. The receiving host on receiving the SYN segment reply with an acknowledgement (ACK) and with its own SYN segment and this form the second part of the handshake. The final step of the handshake is the n completed by the sending host responding with its own ACK segment to acknowledge the acceptance of the SYN/ACK. 
Once this process is completed, the hosts then established a virtual circuit between themselves through which the data will be transferred (Noonan and Dobrawsky 2006). As good as the three ways handshake of the TCP is, it also has its short comings. The most common one being the SYN flood attack. This form of attack occurs when the destination host such as the Server is flooded with a SYN session request without receiving any ACK reply from the source host (malicious host) that initiated a SYN session. The result of this action causes DOS attack as destination host buffer will get to a point it can no longer take any request from legitimate hosts but have no other choice than to drop such session request (Noonan and Dobrawsky 2006). 2.2.5 USER DATAGRAM PROTOCOL (UDP) UDP unlike the TCP is a standard connectionless transport mechanism that operates at the transport layer of OSI model. It is described by the Request for Comment (RFC) 768 (Noonan and Dobrawsky 2006; Simoneau 2006). When using UDP to transfer packets between hosts, session initiation, retransmission of lost or damaged packets and acknowledgement are omitted therefore, 100 percent packet delivery is not guaranteed (Sundararajan et al 2006; Postel 1980). UDP is designed with low over head as it does not involve initiation of session between hosts before data transmission starts. This protocol is best suite for small data transmission (Noonan and Dobrawsky 2006). 2.2.6 INTERNET CONTROL MESSAGE PROTOCOL (ICMP). ICMP is primarily designed to identify and report routing error, delivery failures and delays on the network. This protocol can only be used to report errors and can not be used to make any correction on the identified errors but depend on routing protocols or reliable protocols like the TCP to handle the error detected (Noonan and Dobrawsky 2006; Dunkels 2003). ICMP makes use of the echo mechanism called Ping command. This command is used to check if the host is replying to network traffic or not (Noonan and Dobrawsky 2006; Dunkels 2003). 2.3 OTHER NETWORK SECURITY ELEMENTS. 2.3.1 VIRTUAL PRIVATE NETWORK (VPN) VPN is one of the network security elements that make use of the public network infrastructure to securely maintain confidentiality of information transfer between hosts over the public network (Bou 2007). VPN provides this security features by making use of encryption and Tunneling technique to protect such information and it can be configured to support at least three models which are Remote- access connection. Site-to-site ( branch offices to the headquarters) Local area network internetworking (Extranet connection of companies with their business partners) (Bou 2007). 2.3.2 VPN TECHNOLOGY VPN make use of many standard protocols to implement the data authentication (identification of trusted parties) and encryption (scrambling of data) when making use of the public network to transfer data. These protocols include: Point-to-Point Tunneling Protocol PPTP [RFC2637] Secure Shell Layer Protocol (SSL) [RFC 2246] Internet Protocol Security (IPSec) [RFC 2401] Layer 2 Tunneling Protocol (L2TP) [RFC2661] 2.3.2.1 POINT-TO-POINT TUNNELING PROTOCOL [PPTP] The design of PPTP provides a secure means of transferring data over the public infrastructure with authentication and encryption support between hosts on the network. This protocol operates at the data link layer of the OSI model and it basically relies on user identification (ID) and password authentication for its security. 
PPTP did not eliminate Point-to-Point Protocol, but rather describes better way of Tunneling PPP traffic by using Generic Routing Encapsulation (GRE) (Bou 2007; Microsoft 1999; Schneier and Mudge 1998). 2.3.2.2 LAYER 2 TUNNELING PROTOCOL [L2TP] The L2TP is a connection-oriented protocol standard defined by the RFC 2661which merged the best features of PPTP and Layer 2 forwarding (L2F) protocol to create the new standard (L2TP) (Bou 2007; Townsley et al 1999). Just like the PPTP, the L2TP operates at the layer 2 of the OSI model. Tunneling in L2TP is achieved through series of data encapsulation of the different levels layer protocols. Examples are UDP, IPSec, IP, and Data-Link layer protocol but the data encryption for the tunnel is provided by the IPSec (Bou 2007; Townsley et al 1999). 2.3.2.3 INTERNET PROTOCOL SECURITY (IPSEC) [RFC 2401] IPSec is a standard protocol defined by the RFC 2401 which is designed to protect the payload of an IP packet and the paths between hosts, security gateways (routers and firewalls), or between security gateway and host over the unprotected network (Bou 2007; Kent and Atkinson 1998). IPSec operate at network layer of the OSI model. Some of the security services it provides are, authentication, connectionless integrity, encryption, access control, data origin, rejection of replayed packets, etc (Kent and Atkinson 1998). 2.3.3.4 SECURE SOCKET LAYER (SSL) [RFC 2246] SSL is a standard protocol defined by the RFC 2246 which is designed to provide secure communication tunnel between hosts by encrypting hosts communication over the network, to ensure packets confidentiality, integrity and proper hosts authentication, in order to eliminate eavesdropping attacks on the network (Homin et al 2007; Oppliger et al 2008). SSL makes use of security elements such as digital certificate, cryptography and certificates to enforce security measures over the network. SSL is a transport layer security protocol that runs on top of the TCP/IP which manage transport and routing of packets across the network. Also SSL is deployed at the application layer OSI model to ensure hosts authentication (Homin et al 2007; Oppliger et al 2008; Dierks and Allen 1999). 2.4 FIREWALL BACKGROUND The concept of network firewall is to prevent unauthorised packets from gaining entry into a network by filtering all packets that are coming into such network. The word firewall was not originally a computer security vocabulary, but was initially used to illustrate a wall which could be brick or mortar built to restrain fire from spreading from one part of a building to the other or to reduce the spread of the fire in the building giving some time for remedial actions to be taken (Komar et al 2003). 2.4.1BRIEF HISTORY OF FIREWALL Firewall as used in computing is dated as far back as the late 1980s, but the first set of firewalls came into light sometime in 1985, which was produced by a Ciscos Internet work Operating System (IOS) division called packet filter firewall (Cisco System 2004). In 1988, Jeff Mogul from DEC (Digital Equipment Corporation) published the first paper on firewall. Between 1989 and 1990, two workers of the ATT Bell laboratories Howard Trickey and Dave Persotto initiated the second generation firewall technology with their study in circuit relays called Circuit level firewall. Also, the two scientists implemented the first working model of the third generation firewall design called Application layer firewalls. 
Sadly enough, there was no published documents explaining their work and no product was released to support their work. Around the same year (1990-1991), different papers on the third generation firewalls were published by researchers. But among them, Marcus Ranums work received the most attention in 1991 and took the form of bastion hosts running proxy services. Ranums work quickly evolved into the first commercial product—Digital Equipment Corporations SEAL product (Cisco System 2004). About the same year, work started on the fourth generation firewall called Dynamic packet filtering and was not operational until 1994 when Check Point Software rolled out a complete working model of the fourth generation firewall architecture. In 1996, plans began on the fifth generation firewall design called the Kernel Proxy architecture and became reality in 1997 when Cisco released the Cisco Centri Firewall which was the first Proxy firewall produced for commercial use (Cisco System 2004). Since then many vendor have designed and implemented various forms of firewall both in hardware and software and till date, research works is on going in improving firewalls architecture to meet up with ever increasing challenges of network security. 2.5 DEFINITION According to the British computer society (2008), Firewalls are defence mechanisms that can be implemented in either hardware or software, and serve to prevent unauthorized access to computers and networks. Similarly, Subrata, et al (2006) defined firewall as a combination of hardware and software used to implement a security policy governing the flow of network traffic between two or more networks. The concept of firewall in computer systems security is similar to firewall built within a building but differ in their functions. While the latter is purposely designed for only one task which is fire prevention in a building, computer system firewall is designed to prevent more than one threat (Komar et al 2003).This includes the following Denial Of Service Attacks (DoS) Virus attacks Worm attack. Hacking attacks etc 2.5.1 DENIAL OF SERVICE ATTACKS (DOS) â€Å"Countering DoS attacks on web servers has become a very challenging problem† (Srivatsa et al 2006). This is an attack that is aimed at denying legitimate packets to access network resources. The attacker achieved this by running a program that floods the network, making network resources such as main memory, network bandwidth, hard disk space, unavailable for legitimate packets. SYN attack is a good example of DOS attacks, but can be prevented by implementing good firewall polices for the secured network. A detailed firewall policy (iptables) is presented in chapter three of this thesis. 2.5.2 VIRUS AND WORM ATTACKS Viruses and worms attacks are big security problem which can become pandemic in a twinkle of an eye resulting to possible huge loss of information or system damage (Ford et al 2005; Cisco System 2004). These two forms of attacks can be programs designed to open up systems to allow information theft or programs that regenerate themselves once they gets into the system until they crashes the system and some could be programmed to generate programs that floods the network leading to DOS attacks. Therefore, security tools that can proactively detect possible attacks are required to secure the network. One of such tools is a firewall with good security policy configuration (Cisco System 2004). Generally speaking, any kind of firewall implementation will basically perform the following task. 
Manage and control network traffic. Authenticate access Act as an intermediary Make internal recourses available Record and report event 2.5.3 MANAGE AND CONTROL NETWORK TRAFFIC. The first process undertaken by firewalls is to secure a computer networks by checking all the traffic coming into and leaving the networks. This is achieved by stopping and analysing packet Source IP address, Source port, Destination IP address, Destination port, IP protocol Packet header information etc. in order decide on what action to take on such packets either to accept or reject the packet. This action is called packet filtering and it depends on the firewall configuration. Likewise the firewall can also make use of the connections between TCP/IP hosts to establish communication between them for identification and to state the way they will communicate with each other to decide which connection should be permitted or discarded. This is achieved by maintaining the state table used to check the state of all the packets passing through the firewall. This is called stateful inspection (Noonan and Dobrawsky 2006). 2.5.4 AUTHENTICATE ACCESS When firewalls inspects and analyses packets Source IP address, Source port, Destination IP address, Destination port, IP protocol Packet header information etc, and probably filters it based on the specified security procedure defined, it does not guarantee that the communication between the source host and destination host will be authorised in that, hackers can manage to spoof IP address and port action which defeats the inspection and analysis based on IP and port screening. To tackle this pit fall over the network, an authentication rule is implemented in firewall using a number of means such as, the use of username and password (xauth), certificate and public keys and pre-shared keys (PSKs).In using the xauth authentication method, the firewall will request for the source host that is trying to initiate a connection with the host on the protected network for its username and password before it will allow connection between the protected network and the source host to be establi shed. Once the connection is been confirmed and authorised by the security procedure defined, the source host need not to authenticate itself to make connection again (Noonan and Dobrawsky 2006). The second method is using certificates and public keys. The advantage of this method over xauth is that verification can take place without source host intervention having to supply its username and password for authentication. Implementation of Certificates and public keys requires proper hosts (protected network and the source host) configuration with certificates and firewall and making sure that protected network and the source host use a public key infrastructure that is properly configured. This security method is best for big network design (Noonan and Dobrawsky 2006). Another good way of dealing with authentication issues with firewalls is by using pre-shared keys (PSKs). The implementation of PSKs is easy compare to the certificates and public keys although, authentication still occur without the source host intervention its make use of an additional feature which is providing the host with a predetermined key that is used for the verification procedure (Noonan and Dobrawsky 2006). 2.5.5 ACT AS AN INTERMEDIARY When firewalls are configured to serve as an intermediary between a protected host and external host, they simply function as application proxy. 
2.5.5 ACT AS AN INTERMEDIARY

When firewalls are configured to serve as an intermediary between a protected host and an external host, they simply function as an application proxy. The firewall in this setup is configured to impersonate the protected host, so that all packets destined for the protected host from the external host are delivered to the firewall, which appears to the external host to be the protected host. Once the firewall receives the packets, it inspects them to determine whether each packet is valid (e.g. a genuine HTTP packet) or not before forwarding it to the protected host. This firewall design completely blocks direct communication between the hosts.

2.5.6 RECORD AND REPORT EVENTS

While it is good practice to put strong security policies in place to secure a network, it is equally important to record firewall events. Using firewalls to record and report events is a technique that can help to investigate what kind of attack took place in situations where the firewall was unable to stop malicious packets that violated the access control policy of the protected network. Recording these events gives the network administrator a clear understanding of the attack and, at the same time, allows the recorded events to be used to troubleshoot the problem that has taken place. To record these events, network administrators make use of different methods, but syslog or a proprietary logging format is most commonly used for firewalls. However, some malicious events need to be reported quickly so that immediate action can be taken before serious damage is done to the protected network. Therefore firewalls also need an alarm mechanism, in addition to the syslog or proprietary logging format, for whenever the access control policy of the protected network is violated. The types of alarm supported by firewalls include console notification, Simple Network Management Protocol (SNMP) notification, paging notification and e-mail notification (Noonan and Dobrawsky 2006). Console notification is a warning message presented on the firewall console; the problem with this method is that the console needs to be monitored by the network administrator at all times so that the necessary action can be taken when an alarm is generated. Simple Network Management Protocol (SNMP) notification is implemented by creating traps which are transferred to the network management system (NMS) monitoring the firewall. Paging notification is set up on the firewall to deliver a page to the network administrator whenever the firewall encounters an event; the message can be alphanumeric or numeric, depending on how the firewall is set up. E-mail notification is similar to paging notification, but in this case the firewall sends an e-mail to the appropriate address instead.

2.6 TYPES OF FIREWALLS

Going by the firewall definition, firewalls are expected to perform some key functions such as application proxying, network address translation and packet filtering.

2.6.1 APPLICATION PROXY

This is also known as an application gateway, and it acts as a connection agent between the protected network and the external network. Basically, the application proxy is a host on the protected network that is set up as a proxy server. Just as the name implies, an application proxy functions at the application layer of the Open Systems Interconnection (OSI) model and makes sure that all application requests from the secured network are communicated to the external network through the proxy server, and that no packets pass from the external network to the secured network until the proxy has checked and confirmed the inbound packets. This type of firewall supports different protocols such as Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP) and Simple Mail Transfer Protocol (SMTP) (Noonan and Dobrawsky 2006; NetContinuum 2006).
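On a Linux gateway, one common way of forcing client traffic through such a proxy is a redirect rule in the NAT table. The line below is a hedged sketch only; eth1 is assumed to be the internal interface, and port 3128 simply follows the conventional default of the Squid proxy rather than anything specified in this thesis.

# Illustrative transparent-proxy redirection: web requests leaving the
# protected network (eth1) are diverted to a proxy listening locally.
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports 3128

With a rule of this kind the external host never sees the protected client directly; it only ever talks to the proxy, which matches the intermediary behaviour described in section 2.5.5.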
2.6.2 NETWORK ADDRESS TRANSLATION (NAT)

NAT alters the IP addresses in hosts' packets, hiding the genuine IP addresses of the secured network's hosts and dynamically replacing them with different IP addresses (Cisco System 2008; Walberg 2007). When request packets are sent from a secured host through the gateway to an external host, the source address is modified to a different IP address by NAT. When the reply packets arrive at the gateway, NAT replaces the modified address with the genuine host address before forwarding them to the host (Walberg 2007). The role played by NAT in a secured network system makes it difficult for unauthorized parties to discover:

The number of hosts available in the protected network
The topology of the network
The operating systems the hosts are running
The type of host machine (Cisco System 2008).

2.6.3 PACKET FILTERING

"Firewalls and IPSec gateways have become major components in the current high speed Internet infrastructure to filter out undesired traffic and protect the integrity and confidentiality of critical traffic" (Hamed and Al-Shaer 2006). Packet filtering is based on the laid-down security rules defined for a network or system. Filtering traffic over a network is a big task that requires a comprehensive understanding of the network on which it will be set up, and the defined policy must always be updated in order to handle possible new network attacks (Hamed and Al-Shaer 2006).

2.6.4 INTRUSION DETECTION SYSTEMS

Network penetration attacks are now on the increase, with valuable information being stolen or damaged by attackers. Many security products have been developed to combat these attacks; two such products are Intrusion Prevention Systems (IPS) and Intrusion Detection Systems (IDS). An IDS is software designed purposely to monitor and analyse all the activities (network traffic) on the network for any suspicious threats that may violate the defined network security policies (Scarfone and Mell 2007; Vignam et al 2003). There is a variety of methods an IDS can use to detect threats on the network; two of them are anomaly based IDS and signature based IDS.

2.6.4.1 ANOMALY BASED IDS

An anomaly based IDS is set up to monitor and compare network events against what is defined as normal network activity, represented by a profile, in order to detect any deviation from the defined normal events. Examples of such events are the type of bandwidth used, the type of protocols, etc., and once the IDS identifies a deviation in any of these events, it notifies the network administrator, who then takes the necessary action to stop the intended attack (Scarfone and Mell 2007).

2.6.4.2 SIGNATURE BASED IDS

A signature based IDS is designed to monitor and compare packets on the network against a signature database of known malicious attacks or threats. This type of IDS is efficient at identifying already known threats but ineffective at identifying new threats which are not yet defined in the signature database, therefore giving way to network attacks (Scarfone and Mell 2007).
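Packet-filtering firewalls can approximate this signature idea in a very crude way. As a hedged illustration only (a real IDS signature is far richer than a single byte pattern, and the pattern below is purely hypothetical), iptables can drop packets whose payload contains a known byte string:

# Illustrative "signature" rule: drop forwarded packets whose payload
# contains a hypothetical byte pattern associated with a known exploit.
iptables -A FORWARD -m string --algo bm --string "EXAMPLE-EXPLOIT-PATTERN" -j DROP

A rule like this only matches a fixed pattern in individual packets; it cannot reassemble streams or reason about protocol state, which is exactly the gap that dedicated IDS and the IPS discussed next are meant to fill.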
2.6.5 INTRUSION PREVENTION SYSTEMS (IPS)

IPS are proactive security products, implemented in software or hardware, used to identify malicious packets and to prevent such packets from gaining entry into networks (Ierace et al 2005; Botwicz et al 2006). An IPS is another form of firewall which is basically designed to detect irregularities in normal network traffic and likewise to stop possible network attacks such as denial of service attacks. IPS are capable of dropping malicious packets and disconnecting any connection suspected to be illegal before such traffic gets to the protected host. Just like a typical firewall, an IPS makes use of rules defined in the system setup to determine the action to take on any traffic, which could be to allow or to block it. An IPS uses stateful packet analysis to protect the network. Similarly, an IPS is capable of performing signature matching, application protocol validation, etc. as a means of detecting attacks on the network (Ierace et al 2005). As good as IPS are, they have their downsides as well. One of these is the problem of false positives and false negatives. A false positive is a situation where legitimate traffic is identified as malicious, resulting in the IPS blocking such traffic on the network. A false negative, on the other hand, is when malicious traffic is identified by the IPS as legitimate, thereby allowing such traffic to pass through the IPS to the protected network (Ierace et al 2005).

2.7 SOFTWARE AND HARDWARE FIREWALLS

2.7.1 SOFTWARE FIREWALLS

Software-based firewalls are software packages installed on computers to filter packets (Permpootanalarp and Rujimethabhas 2001). These are programs set up within the operating system of either personal computers or network servers (web servers and e-mail servers). Once the software is installed and proper security policies are defined, the system (personal computer or server) assumes the role of a firewall. Software firewalls are the second line of defence after hardware firewalls in situations where both are used for network security. Software firewalls can be installed on different operating systems, such as the Windows operating systems, the Mac operating system, Novell NetWare, the Linux kernel and the UNIX kernel. The function of these firewalls is to filter distorted network traffic. There are several software firewalls, some of which include Online Armor Firewall, McAfee Personal Firewall, ZoneAlarm, Norton Personal Firewall, BlackICE Defender, Sygate Personal Firewall, Panda Firewall, the DoorStop X Firewall, etc. (Lugo and Parker 2005). When designing a software firewall, two key things are considered: per-packet filtering and per-process filtering. The per-packet filter is designed to search for distorted packets, to detect port scans and to check whether packets are accepted into the protocol stack. In the same vein, the per-process filter is designed to check whether a process is allowed to begin a connection to the secured network or not (Lugo and Parker 2005). It should be noted that there are different implementations of all firewalls: while some are built into the operating system, others are add-ons. Examples of built-in firewalls are the Windows based firewall and the Linux based firewall.

2.7.2 WINDOWS OPERATING SYSTEM BASED FIREWALL

In operating system design, security features are one important aspect that is given great consideration. This is a challenge the software giant Microsoft has always made sure to address in its products. In the software industry, Mi

Friday, October 25, 2019

We Must Make Changes in AIDS Education :: Argumentative Persuasive Essays

We Must Make Changes in AIDS Education

Due to the fervent efforts of health educators, young people today have a very intimate knowledge of HIV and AIDS. These students were born in the early eighties at the beginning of the AIDS epidemic. Teachers guided students through years of health classes in their junior high and high school years and informed students about the destructive nature of the AIDS virus and the ways in which it can and cannot be contracted. Health educators made sure that students were well informed about HIV and presented the topic as being gender neutral. Although pop culture and the media claimed that homosexual males were responsible for the epidemic, this idea was never presented in the classroom. Though I am grateful for this aspect of AIDS education, it seems that there was an important aspect missing from the curriculum: the more numerous negative effects that the disease has for women. Health education needs to present the effects of AIDS on women and encourage them to be more concerned about contracting and living with the disease. In spite of this need for reform, however, health educators may feel uneasy about changing their curriculum and argue that there are a number of reasons to keep the HIV and AIDS curriculum the same. One reason that they might have for maintaining the current curriculum is the fear that presenting HIV as more of a woman's issue could decrease awareness of the disease in men. However, this probably will not happen. Many people, though not necessarily health educators, already view HIV as more of a man's disease. In fact, according to Allen E. Carrier of AIDS Project Los Angeles, gay men aged 17-24 are at a very high risk for HIV infection and realize the dangers of unsafe sex but continue to engage in high-risk behavior (DeNoon "National"). In other words, most men are aware and informed, but some are choosing to ignore some of the education that they received. In reality, men need to make as many changes as women in order to stop the AIDS epidemic. Peter Piot, the executive director of the Joint United Nations Program on HIV/AIDS, says that "[m]en have a crucial role to play in bringing about this radical change" (Henderson). Therefore, the new AIDS curriculum would encourage both men and women to change their attitudes and actions in order to bring about change.

Thursday, October 24, 2019

A Look At Greek Lyric Poetry And John Cage Essay

Music goes beyond language barriers; it speaks no language but that of the heart. However, like all art forms it has tenets and principles as to what is good music and what is simply noise. What happens when artists claim that their works are music, when those works are perceived to be avant-garde, not the kind of music that dominates the cultural period and, worse, do not come from tradition? This paper seeks to take a look at music in Hellenistic Greece, in particular a lyric by one of its known muses, Sappho, with her only surviving complete work, the Ode to Aphrodite, and to compare it with what is considered an experimental composition by John Cage, his 4'33". Both pieces were meant to be performed, although how they are performed has also raised questions.

Ancient Greece is revered as a center of learning, where arts and culture flourished. It was one of the places where the earliest treatises on the different art forms were written, and the Greeks were keen on what constituted good and bad art, giving rise even to debates as to the function of art. Plato was known to promote the arts that would inspire people's thinking, not their emotions, for he considered human emotions a weakness, and also because during that time musical scales developed from the study of the harmony of the universe, the mathematical equations used by the Pythagoreans (Henderson, 1957). It was because of this that he did not approve of the poets' lyrics, because they deviated from the musical modes people were used to and relied on what sounded good to the ear, making music accessible to the people (Anderson, 1966). Sappho was one of those poets whose lyric poetry, when sung, communicated the love and sensuality it contained, as with her work Ode to Aphrodite, deviating from the traditional, highly mathematically composed melodies that people were supposed to listen to quietly and rigidly, for her lyric love poems were made to be felt and to inspire emotion. In this way, Sappho and her contemporary poets helped create a turn in Greek music.

Like Sappho, John Cage contributed to music with his compositions, characterized as avant-garde, especially his chance pieces. However, the work that challenged perceptions and definitions of music is his notorious 4'33", a piece in which, for four minutes and thirty-three seconds, the orchestra plays nothing. John Cage wrote this piece when he realized that there will always be sound, and he deliberately wrote "Tacet" to instruct the musicians not to play. What Cage wanted the audience to hear was the different sounds that occur during the interval in which the piece is played: all the various sounds that one does not pay attention to because one is listening to something else. This is different from silence, except perhaps figuratively as the "sound of silence", since Cage's point was that there is always sound if one listens intently (Cage, 1973). Sappho's and Cage's music differ from one another in that Sappho was expressing herself through her poetry, while Cage was making the listener turn to his environment. Although created in different environments and cultures, both musical pieces can be interpreted in a personal way, making each a unique experience. Sappho's Ode to Aphrodite can mean something else to a modern listener than it did in ancient Greece, and of course Cage's 4'33" will always conjure something unique for each individual.
What this shows us is that although music is made in a certain era, it can transcend the boundaries of time as long as it resonates with what is human and universal, whether as an appreciation of the sounds around us or of those that speak of love; and that although music is governed by principles of what makes it good, it will always be a matter of personal experience.

SOURCES:

Anderson, W. (1966). Ethos and Education in Greek Music. Cambridge: Harvard University Press.
Cage, John. (1973). Silence: Lectures and Writings. Wesleyan Paperback.
Henderson, Isobel (1957). "Ancient Greek Music" in The New Oxford History of Music, vol. 1: Ancient and Oriental Music. Oxford: Oxford University Press.
http://homoecumenicus.com/ioannidis_ancient_greek_texts.htm, accessed on June 15, 2009.
http://www.greylodge.org/occultreview/glor_013/433.htm, accessed on June 15, 2009.

Wednesday, October 23, 2019

A Reflective Observation on Global Warming

Elizabeth Kolbert's chapter 2, entitled "A Warmer Sky", in her book "Field Notes from a Catastrophe" is basically about the discovery of global warming and the developments in awareness of it. It also presents relevant data about certain factors that affect global warming.

John Tyndall's invention of the ratio spectrophotometer in 1859 marked the advent of awareness of global warming. The function of the device is to measure how much radiation gases absorb and transmit. Results of the tests showed that the gases most common in the air, such as nitrogen and oxygen, were essentially transparent to radiation. However, other gases such as carbon dioxide and water vapour absorbed infrared radiation (p.36).

With these results, Tyndall stumbled upon a baffling and shocking truth that would cause worldwide sensation and concern in the following generations. Tyndall concluded that these gases contribute largely to the way the earth radiates and absorbs radiation from the sun. He thought of the atmosphere as a barrier that regulates the amount of radiation entering and leaving the earth, which affects its overall temperature. This notion was later known as the "natural greenhouse effect" (p.36).

The sun, the earth and other hot bodies emit radiation, and the amount of radiation emitted depends on temperature. This is described by the Stefan-Boltzmann Law, which states that the radiation emitted by a body is directly proportional to its temperature raised to the fourth power. The role of the greenhouse gases is to absorb radiation selectively: they allow visible radiation from the sun to penetrate the atmosphere, while the earth's infrared radiation is absorbed by the greenhouse gases and re-emitted partly into space and partly back to earth. This phenomenon regulates the temperature at the surface of the earth.

After Tyndall passed away from an overdose of a sleeping drug, Arrhenius continued what Tyndall had left unfinished. Arrhenius studied the effects of altering carbon dioxide levels in the atmosphere, and he found that rising carbon dioxide levels would increase the earth's temperature; hence he coined the phrase "to live under a warmer sky" for the generations to come (p.42).

Interest in climate change mellowed after the death of Arrhenius. However, in the mid 1950s there was a rebirth in the awareness of global warming, and this was due to Charles David Keeling, a chemist. The results of his research into atmospheric carbon dioxide levels, the "Keeling Curve", showed that the carbon dioxide level increases as time passes. The results grew more alarming as the years went by. The Keeling Curve also showed that the carbon dioxide level in 2005 was 375 parts per million, and at this terrifying rate it will increase to 500 parts per million by the middle of the century, which will greatly affect the temperature of the earth and will make us feel the full effects of global warming (p.44).

Global warming threatens us with extinction. It is caused mainly by industrialization, and we must stop, or at least control, the rise of carbon dioxide levels in the atmosphere to save the future generations. Global warming will cause the polar ice and glaciers to melt, which contributes to a rise in sea level. This rise will flood coastal regions and other land masses.
There is also an expected change in rainfall patterns across the globe that will greatly affect food crops and will be a major setback for food production in many nations. With the increase in temperature, plants and animals will be forced to move to cooler areas, and those that are unable to adapt will be doomed to extinction (Global Warming, Encarta).

Tuesday, October 22, 2019

Round

Round
By Maeve Maddox

The word round is the ideal word to illustrate the fact that a word is not a part of speech until it is used in a sentence. Of the eight classic parts of speech–noun, verb, adjective, adverb, preposition, conjunction, pronoun, and interjection–round can function as five of them.

1. Round as Noun

We speak of a round of golf and the rounds of a boxing match. We sing musical rounds like "Row, Row, Row Your Boat" and "Frere Jacques." Shakespeare spoke of a king's crown as "a golden round." The steps of a ladder are called rounds. The creed of the United States Postal Service, translated from Herodotus, declares, "Neither snow nor rain nor heat nor gloom of night stays these couriers from the swift completion of their appointed rounds."

Here are some more common meanings of round as a noun:

a large piece of beef
a slice of bread, especially toast
a regularly recurring sequence
the constant passage and recurrence of days
the act of ringing a set of bells in sequence
a circular route
a regular visit by a doctor or a nurse in a hospital
a set of drinks bought for all the people in a group
an amount of ammunition needed to fire one shot
a single volley of fire by artillery
an outburst of applause
a period or bout of play at a game or sport
a division of a game show
a session of meetings for discussion

2. Round as Adjective

Anything that is spherical in shape may be described as round, for example, balls, marbles, oranges, and grapes. Also round are cake pans, plates, Frisbees, wheels, CDs, and bagels. Vowels can be round (i.e., enunciated by contracting the lips to form a circular shape). Applied to a quantity of something, round can mean large or considerable: "A million dollars is a good round sum." But applied to an estimate, round means rough or approximate: "The figure of three thousand years was only a round guess." Shakespeare and his contemporaries frequently used round in the sense of outspoken: "Sir Toby, I must be round with you." Horses can trot at "a good round pace," and scholars often have "round shoulders."

3. Round as Verb

You can round a piece of clay into a ball, round the edges of a table, round the bases, round chickens into a corner, round out your gnome collection, round a number, and round suddenly on someone who has been annoying you.

4. Round as Adverb and Preposition

These uses of round are more common in British usage than in American:
"When the door slammed, everyone turned round." (adverb)
"At last, the bus came round the corner." (preposition)
See Round vs. Around for a discussion of these two uses of round.

Sunday, October 20, 2019

Refugees essays

Refugees essays Good morning everyone. Front-page news: political decisions, humanitarian aid and asylum. These events are taking place in many parts of the world and are about refugees. Why are there so many refugees? What are the different views on the subject? Who will help them? Will you? Australia defines a refugee as a person who fears persecution for reasons of race, religion, nationality, membership of a particular social group or political opinion, is outside the country of his/her nationality and is unable, or owing to such fear, is unwilling to avail himself/herself of the protection of that country. Last year fourteen and a half million people sought refuge. Afghanistan is a country with a large number of refugees. As we all would know now, America is bombing Afghanistan because of terrorism and its Taliban regime. Afghanistan is a very poor, war torn country that is subject to severe drought. For this reason people left their homes in search of food. Africa is a continent that has suffered the world's most invasive violence during 2000 and in early 2001. Nearly 3 million Africans became new refugees or were newly displaced within their own countries during 2000. Reasons for this great number of refugees include war, repression, civil unrest, and politically induced humanitarian emergencies. Other countries with a large number of refugees include Columbia, Sudan, Sierra Leone, Indonesia, Eritrea, Ethiopia, The Balkans, Chechnya and Liberia. The cause of this problem includes war, corrupt governments, civil unrest and poverty. For these reasons the people of these countries need to find protection and basic needs to survive. The refugee crisis in Australia has caused much debate. The viewpoints on the issue include, supporting the government that dont want illegal immigration or taking the humanitarian view that we should let the boat people come to Australia. The government does not w ...

Saturday, October 19, 2019

Characteristic Of The American Nation History Essay

Characteristic Of The American Nation History Essay

The United States is different from the rest of the world in many respects, and Americans themselves like to emphasize their uniqueness. Many books, introductions to cultural studies, manuals, textbooks, dictionaries, guides, articles, and essays have been written with one common aim. They have all tried to identify and name the features of American distinctiveness, and they have attempted to explain why Americans are such an exceptional nation. This thesis is also one of the efforts to objectify the rather complicated jigsaw of the American character. Over a total of four chapters, a complex portrait of an American will be offered. To start research that seeks the current form of any culture, it is important to look first into its past. America may not have as long a history as England or Italy, but still, approximately 200 years of self-selective immigration were enough to establish the very clear distinctiveness typical of the United States. The first chapter of this thesis will attempt to point out various occurrences from the foundation of the first permanent settlement in North America up to 1776. Two of the greatest publications by foreign travelers, by Alexis de Tocqueville and J. Hector St. John de Crevecoeur, which contributed to the development of national pride, will be mentioned. These two historical sources will be compared with current literature, and it will be observed whether they differ or not. Finally, the chapter will deal with the proportions of European immigrants and how they helped to change the portrait of the American nation. The following chapter will continue the approximation of American distinctiveness by portraying the system of values. First of all, it will be clarified what is regarded as a value, because traditionally more than one definition of this term occurs. It will be proven that values function like dominant pillars on which the structure of the American character has been built. Considerable attention will be paid to values like work, achievement, and equality, because these values have their historical background and are still reflected not only in American behavior but also in stereotypes common about American citizens. The third part will be devoted to religion in the United States. This topic is purposely not attached to the chapter about values because, as will be explained, religion is traditionally not mentioned as a value. What is more, religion will be portrayed as an independent factor touching the different beliefs of common people, but also as a factor contrasting with the secularity of the state. The very last and rather shorter chapter will comment on stereotypes and prejudices, which often do not provide a very objective picture of the United States. Attention will also be paid to the notably higher number of American stereotypes in comparison to other countries. Finally, some examples of individual stereotypes will be provided, and by these means the picture of the American nation will be concluded.

Americans in Terms of the Historical Development

Ever since America was discovered, especially North America, it has represented an object of fascination to observers from other countries, who have been trying to solve the question of American nationality. The quest for the American [1] national identity, and for who or what is considered to be American, is perennial. It is regarded as common knowledge that the US is primarily and undoubtedly a country of immigrants.
According to the American historian John Harmon McElroy, more than 55 million immigrants have arrived in America over the last four centuries. Such a high number represents the largest movement of people into a single place or country in the history of mankind (60).

Friday, October 18, 2019

Market Plan for Toyota Camry Term Paper Example | Topics and Well Written Essays - 2500 words

Market Plan for Toyota Camry - Term Paper Example

This term paper describes the Toyota Camry as a product of the Toyota corporation and analyzes the marketing strategies that the corporation used to promote the product. The Toyota Camry is one of Toyota's innovative hybrid vehicles and brings excellent benefits to the company. Though Toyota faces many challenges in its business, which are compared in the paper, it successfully tackles them through good marketing research and the strategies it uses. The researcher states that Toyota makes every possible effort to make the Camry, as well as its other products, successful in the vehicle industry. To promote its products, Toyota uses many sales promotion campaigns, which are described in the term paper, with the aim of building and sustaining good relationships with customers and clients. One of the benefits mentioned by the researcher is that Toyota provides free check-ups and service to customers, as well as financial support from banks to help customers purchase cars more easily. Toyota also targets its customers by running advertising campaigns on various automotive websites that are carefully designed and developed to create a great impression on potential customers. Toyota developed Java-based advertising for the 'Annual Nationwide Clearance Event' in 2002. In conclusion, the Toyota Corporation always tries to use creative design and modern technology in its vehicles to influence potential customers to buy its products, and it designs products by considering the needs and preferences of customers.

Analysis Essay Example | Topics and Well Written Essays - 500 words - 11

Analysis - Essay Example

Verification of requirements will ensure that people define them correctly, which implies that they will be of acceptable quality. The institute should ensure that the management effectively revises the requirements that are defective. The management should assign a business analyst the role of ensuring that the requirements are ready for review by customers. They should also ensure that the requirements contain all the information that workers need for further work (Carkenord 36). The institute should verify both input and output requirements to attain efficient results. The main purpose of validating requirements is to ensure that the various requirements support delivery of value to an entity, fulfil its goals, and meet the needs of stakeholders. Brisbane Institute of Art (BIA) should validate requirements to ensure that stakeholder, solution, and transition requirements are in line with the requirements of the business. The management should form assumptions concerning how customers and stakeholders will respond to the services they offer. This will enable them to acquire vital information concerning the introduction of the new product or service (Carkenord 42). The institute should define an evaluation criterion that is measurable. The evaluation criterion should show whether the resulting change is successful. This criterion will indicate performance, thus ensuring that one chooses an appropriate criterion. Value refers to what a solution delivers within the scope of that solution. In case a solution does not give either direct or indirect value to stakeholders, then they should eliminate it. There are requirements that have value to stakeholders but are not a desirable part of a solution. The management of the institution should consider the opportunity cost that would arise from investing in this institution. Opportunity cost is the benefit that one accrues as a

Professional Article Review Essay Example | Topics and Well Written Essays - 750 words

Professional Article Review - Essay Example

The study under review investigates the long-term impact of MPH on children affected by ADHD and the comparative effect of academic intervention, along with other covariates such as age, sex and IQ. The study involved 85 children with ADHD aged 5-12 years. Baseline assessments included the Wide Range Achievement Test-Revised (WRAT-R), parent and teacher ratings of ADHD symptoms and academic achievement, estimated intellectual ability, OCHS academic and psychosocial ratings, duration of medication and academic support. After the baseline assessments, children were randomly assigned to an MPH treatment group or a placebo group in a double-blind trial, with the treatment group administered a gradually rising dose of 5 mg per administration to reach a target dose of 0.7 mg/kg body weight. Treatment was followed for 12 months, keeping other conditions uniform, and the baseline assessments were repeated after the 12-month treatment period. Regression analysis was done to estimate academic performance, one model for each subset of the WRAT-R and for the parent and teacher ratings, with baseline covariates and total treatment as variables. The results indicated that neither medication nor academic intervention could be credited with significant improvement in academic performance compared to baseline values.

Critical Evaluation

Studies on ADHD lack evidence for a mechanism of association between academic underachievement and ADHD, and stimulants have been recommended on the basis of short-term trials showing positive impacts on symptoms in general. However, the authors rightly claim that data on the long-term and cumulative impact of MPH are unavailable. The procedure followed by the authors is exhaustive, involving baseline and post-treatment assessments that are both subjective and objective. The explanations of the assessments and their design are either complete or properly referenced, so as to enable repeatability. Both the WISC-R and the Ontario Child Health Scale (OCHS) are established clinical tools for IQ assessment of children with learning disabilities and ADHD. Objective ratings make the assessments easy to conduct and the results specific. Crossing over between the placebo and treatment groups was allowed, but records were maintained. These records helped in estimating the total time on medication, which represented the cumulative effect of MPH, while the medication status at the end of the trial was indicative of the current effect. The continuation of additional interventions in the form of academic support, together with the randomization of parents to a training or self-help group and the crossover design, ensured that the trial was highly naturalistic. While allowing for a naturalistic design led to a lack of control over some important variables, this was partially overcome by the use of multiple regression analysis. The nature of the academic support provided to the children was one very important variable. The regression analysis used also helped to overcome the loss in numbers due to crossing over during the study. Disparities between the results for objective and subjective assessments indicate that the efficacy of MPH is based on prejudices and is overrated. The author claims that medication does not have

Thursday, October 17, 2019

Terrorism and patriot act Coursework Example | Topics and Well Written Essays - 500 words

Terrorism and patriot act - Coursework Example

Over 3,000 people lost their lives. While America was still recovering from the shock of this barbaric act on its home soil, President George W. Bush lost no time in pursuing the culprits. He ordered airstrikes on likely hideouts of Osama bin Laden in Afghanistan. At the administrative level, he promulgated the Patriot Act 2001 and established the Department of Homeland Security to help deal with all further threats and protect the borders of the USA and its people. Some sweeping powers were given to these personnel to track, apprehend and arrest possible suspects who wanted to harm America and its interests. The extent of these powers is a matter of debate, as many opine that they violate the rights of privacy and freedom guaranteed under the U.S. Constitution (Worrall, 2011). The Patriot Act was signed into law by President Bush on October 26, 2001, just over a month and a half after the events of September 11. The Act has 10 separate sections, relating to enhancing domestic security against terrorism, surveillance procedures, anti-money-laundering, removing obstacles to investigations, information sharing, criminal law, terrorism intelligence and border security. Many sections were due to sunset after four years, but they were extended by President Obama in the larger public interest (CLDC, 2012). Among the most contentious of the powers, under Section 213, is that of arresting someone on mere suspicion of being a terrorist, and that of searching his or her house without a warrant. Section 218 allows for wiretapping of such a suspect's every means of communication. Under Section 805, anybody even suspected of giving advice or assistance to a terrorist would be liable to arrest and prosecution. Granted, we have to nip terrorism in the bud, but such measures come close to violating the privacy and integrity of American citizens and go against the widely held precept of 'innocent till proven

MULTINATIONAL CORP-EVOL & CUR ISSUE Essay Example | Topics and Well Written Essays - 750 words - 2

MULTINATIONAL CORP-EVOL & CUR ISSUE - Essay Example

(GOOG), Amazon (AMZN) and PowerShares QQQ Trust Series 1 (QQQ). This report covers a transaction involving 500 shares of Google, 10,000 shares of Amazon and 1,000 shares of PowerShares QQQ Trust Series 1. Google Inc. is a global technology company that mainly focuses on areas such as advertising, operating systems and platforms, and enterprise and hardware products. Its main source of revenue is online advertising. By the close of business on April 06, 2014 at 3:59:59 PM, 500 shares of Google Inc. were selling at $545.25. This resulted in an amount of 573,000. The buying price of Google Inc. equity was $545 at a currency/exchange rate of USD/1.00. It is worth noting that the price paid is quoted in the currency of the security's exchange, while the buying power change and transaction amount are quoted in the currency of the portfolio. At the start of the business day on 7th April 2014, the share price of Google Inc. stood at $539.31, representing a price change of $-5.94 (-1.09%). At the current market price, buying 500 shares of Google Inc. will cost me $272,500. Selling the same quantity will earn me $273,135, hence making a profit equivalent to $635. The profitable nature of Google's shares motivated me to buy them for the portfolio. Amazon.com serves consumers through its retail websites and focuses on selection, price, and convenience. It offers programs that enable sellers to sell their products through the company's websites. Amazon offers its customers low daily product prices and shipping offers. The last buying price for Amazon.com is $320.22, as opposed to the current price of $320.52. The 52-week high is $408.06 while the 52-week low is $245.75. Going by the previous price, the estimated cost for 10,000 shares will be $3,203,310.00. The last sale of Amazon stock was at $320.68, representing a volume of 187,268 shares. Considering this selling price, the estimated cost stands at $3,197,090.00. This represents an income gain