Tuesday, December 31, 2019

Data Chirp Internet

Development of a Data Chirp Measurement Method

Information and Communication Technology (ICT)

In measuring data link quality for downloading and browsing services, the final quality perceived by the user is determined by the session time. Measuring session times is time consuming because they have to be measured for each context, e.g. a small file download, a large file download or a browse session over a number of small Internet pages. Therefore data links are mostly characterized in terms of key performance indicators (KPIs) such as bandwidth, loss and delay, from which one tries to derive the session times. However, the relations between these KPIs and the session times are not clear, and measuring the KPIs themselves is also difficult. This report presents a method that characterizes a data link by a so-called delay fingerprint from which the session times can be predicted. The fingerprint is derived from a lumped UDP packet transfer (concatenated packets sent immediately after each other) followed by a UDP stream of single packets whose sending speed is increased continuously. This chirping approach causes self-induced congestion and allows fingerprinting with only minimal loading of the system under test. In this contribution both live networks and an Internet simulator are used to create data links over a wide variety of conditions, where the data chirp fingerprint as well as the download/session times are measured. From the fingerprint a number of link indicators are derived that characterize the link in terms of KPIs such as ping time, loaded (congested) bandwidth, unloaded (un-congested) bandwidth and random packet loss. The fingerprint measurements allow predicting the service download/session times for small downloads and fast browsing with a correlation of around 0.92 for the simulated links. For large file downloads and large browse sessions no acceptable prediction model could be constructed.

Introduction

Multimedia applications have come to play an important role in everyday life. On the Internet, apart from the widely used Hypertext Transfer Protocol (HTTP), many real-time applications contribute significantly to the overall traffic load of the network. State-of-the-art Information and Communication Technology (ICT) concepts enable the deployment of services and applications in areas such as home entertainment, offices, operations and banking. The backbone of all these distributed services is the core network which facilitates data communication. The quality of the available network connections often has a large impact on the performance of distributed applications. For example, the response time of a document requested over the World Wide Web depends crucially on network congestion.

In general, if we want to quantify the quality of a data link from the user's point of view, we can take two approaches: 1) a glass box approach, in which we know all the system parameters of the network (maximum throughput, loss, delay, buffering) and the application (TCP stack parameters) and use a model to predict download/session times and UDP throughput; 2) a black box approach, in which we characterize the system under test with a test signal and derive a set of black box indicators from the output.
From these link indicators the download/session times, or other relevant KPIs, are predicted. The first approach is taken in draft recommendation [1]. This report investigates the second approach. Most black box approaches estimate the available bandwidth, which is an important indicator for predicting download and session times. Several approaches exist for bandwidth estimation, but bandwidth is not the only important link parameter. For small browsing sessions the end-to-end delay and round trip time are also key indicators that determine the session time and thus the perceived quality. A good black box measurement method should not merely quantify KPIs but should be able to predict download and session times for the download and browse services that run over the link.

Data Links

In telecommunication a data link connects one location to another for the purpose of transmitting and receiving data. It can also be seen as an assembly, consisting of parts of two data terminal equipments (DTEs) and the interconnecting data circuit, that is controlled by a link protocol enabling data to be transferred from a data source to a data sink. Two systems can communicate with each other using an intermediate data link which connects them. The data link can be made up of a large number of elements that all contribute to the final perceived quality. In real-world connections cross traffic will always have an impact on the download and session times, making them difficult to use in the development of a data link quality model. Therefore most of the model development in this report is carried out using simulated links. The setup was established at TNO-ICT Delft, The Netherlands. A Linux system was used with a network card that has two interfaces to emulate a particular network.

Outline

In this report chapter 2 describes the problem definition and the tasks performed in the project. Chapter 3 explains the key performance indicators that quantify data link performance. The measurement approach employed and the principle behind the chirp are described in chapter 4. The experimental setup at TNO-ICT is described in chapter 5. In chapter 6 the KPI implementation is discussed. In chapter 7 the mapping between chirp and service characteristics is discussed. Chapter 8 presents the conclusions. Some of the measurement results are discussed in Appendix A. The management of the project can be found in Appendix B.

Problem Definition

An operator has to know how to set the network parameters in order to deliver the most appropriate end-to-end quality, based on the network KPIs, the service characteristics and the characteristics of the end user equipment. A fast, efficient method for assessing the impact of a network setting on the finally perceived download and session times is thus of vital importance. Plain optimization of KPIs is not necessarily the best strategy, because the final perceived quality is determined by the download and session times. Ideally, a method should be developed with which instantaneous insight can be created into the performance of all services that are carried via the link under consideration. Such a method can also be used to create applications which can take decisions on resource selection or reservation. For small downloads and fast browsing the ping time of the data link will be the dominating factor. For large downloads the available bandwidth will be the dominating factor.
The TCP stack parameters also have a significant impact on these times, as they determine the slow start behavior. For intermediate file sizes the available bandwidth and the un-congested bandwidth will be important, most probably in combination with other KPIs such as packet loss, buffer size and possible bearer switching mechanisms (UMTS). This report presents a method that characterizes a data link by a so-called delay fingerprint from which a set of KPIs is derived. These KPIs are then used to predict the service quality in terms of download/session times. The basic idea, taken in a modified way from [3], is to characterize the data link by sending a limited set of UDP packets over the line in such a way that the delay behavior of these packets allows the link to be characterized in as many aspects as possible. Two types of characterization are used. The first uses a lumped set of packets that are sent over the line immediately after each other, from which the smearing of a single packet can be estimated; this estimate is closely related to the un-congested bandwidth of the link. The second uses a train of UDP packets separated by an ever decreasing time interval, resulting in a so-called data chirp from which the available bandwidth can be estimated.

Key Performance Indicators

In this project we focus on the estimation of KPIs from which the quality of a data link can be determined. Below we discuss the data link performance indicators that are dominant in their impact on the end-to-end session time.

Ping Time

Ping time or round-trip time (RTT) is the amount of time it takes for a packet to go from one computer to another and for the acknowledgment to be returned. For links that span long distances the RTT is relatively large, which directly affects the browse and download times.

Available Bandwidth

Available bandwidth (AB) is the approximate transfer rate that an application can get from a connection in the presence of cross traffic load. Measuring the available bandwidth is of great importance for predicting the end-to-end performance of applications, for dynamic path selection and traffic engineering, and for selecting between a number of differentiated classes of service [4]. The end-to-end available bandwidth between client and server is determined by the link with the minimum unused capacity (referred to as the tight link). In Figure 3.1 the end-to-end available bandwidth is determined by the minimum unused capacity, indicated as A.

Figure 3.1 The available bandwidth determined by the tight link unused capacity.

Several applications need to know the bandwidth characteristics of the underlying network paths. For example, some peer-to-peer applications need to consider the available bandwidth before allowing candidate peers to join the network. Overlay networks can configure their routing tables based on the available bandwidth of the overlay links. Network providers lease links to customers and the charge is usually based on the available bandwidth that is provided. Available bandwidth is also a key concept in congestion avoidance algorithms and intelligent routing systems. Techniques for estimating available bandwidth fall into two broad categories: passive and active measurement. Passive measurement is performed by observing existing traffic without perturbing the network. It processes the load on the link and requires access to all intermediary nodes in the network path to extract end-to-end information [6].
Active measurement, on the other hand, directly probes network properties by generating the traffic needed to make the measurement. Although active techniques inject additional traffic onto the network path, they are more suitable for measuring the end-to-end available bandwidth. In communication networks, high available bandwidth is useful because it supports high volume data transfers, short latencies and high rates of successfully established connections. Obtaining an accurate measurement of this metric can be crucial to the effective deployment of QoS services in a network and can greatly enhance different network applications and technologies.

Un-congested Bandwidth

The term un-congested bandwidth (UB) refers to the maximum transfer rate available for a particular connection in the absence of other traffic (clean link). For a connection it is hard to achieve a transfer rate equal to the UB because of factors such as random packet loss and the TCP slow start mechanism. The UB is limited by the bottleneck link capacity: the un-congested bandwidth of a path is determined by the link with the minimum capacity (termed the bottleneck link). In Figure 3.2 the un-congested bandwidth of the path between the client and server is C = C1, where C1, C2 and C3 are the capacities of the individual links and C1 < C3 < C2.

Figure 3.2 The un-congested bandwidth determined by the bottleneck link capacity.

Packet Loss

Packet loss can be caused by a number of factors, including signal degradation over the network medium, oversaturated network links, corrupted packets rejected in transit, faulty networking hardware, or normal routing routines. The available bandwidth decreases with increasing packet loss. In this project we observe two types of packet loss: random packet loss and congestion packet loss. These two types of loss are discussed in the next chapter, where we go into the details of how to measure these KPIs.

Key Performance Indicator Measurement Approach

In this chapter we discuss how the key performance indicators described in chapter 3 are measured.

Ping Time

Ping is a computer network tool used to test whether a particular host is reachable across an IP network. It works by sending ICMP echo request packets to the target host and listening for ICMP echo response replies. Ping estimates the round-trip time, generally in milliseconds, records any packet loss, and prints a statistical summary when finished. The standard ping tool in Windows XP was used to determine the ping time.

Available Bandwidth Estimation using a UDP Chirp

The data chirp method characterizes the end-to-end quality of a data link in terms of a delay fingerprint. Using the data chirp method, a train of UDP packets is sent over the data link with an exponentially decreasing time interval between subsequent packets. Such a train of packets is referred to as a data chirp. From the delay pattern at the receiving side one can determine characteristic features of the data link such as bandwidth, packet loss and congestion behavior. From these characteristic features one can then try to estimate the service quality of different services that run over the data link. In the classical data chirp [3] the time interval Tm between two consecutive packets m and m+1 is given by:

Tm = T0 · γ^m, with 0 < γ < 1,

where T0 is the time interval between the first two packets. The factor γ (smaller than 1) determines how fast the interval between subsequent packets in the data chirp decreases.
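As a small illustration of this schedule, the sketch below computes the inter-sending times for a given T0 and γ. The numerical values correspond to the chirp settings adopted later in this report; the sketch is written in Python purely for illustration (the actual tool was implemented in Borland Delphi).

```python
# Illustrative sketch of the classical chirp schedule Tm = T0 * gamma^m.
# The values follow the settings adopted later in this report
# (T0 = 200 ms, gamma = 0.99, P = 1500 bytes, N = 400 packets).

def chirp_intervals(t0=0.200, gamma=0.99, n_packets=400):
    """Inter-sending times (in seconds) between consecutive chirp packets."""
    return [t0 * gamma ** m for m in range(n_packets - 1)]

if __name__ == "__main__":
    P = 1500  # UDP packet size in bytes
    intervals = chirp_intervals()
    rates = [P * 8 / t / 1000 for t in intervals]   # offered rate in kbit/s
    print("chirp duration: %.1f s" % sum(intervals))
    print("offered rate: %.0f to %.0f kbit/s" % (rates[0], rates[-1]))
```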
As a result of this decreasing interval, the instantaneous data rate during the data chirp increases. The instantaneous data rate at packet m, Rm, is given by:

Rm = P / Tm [bytes/sec],

where P is the size of a UDP packet in the chirp. A data chirp is illustrated in [3] and shown in Figure 4.1, consisting of individual packets sent over the link with decreasing intervals.

Figure 4.1 Illustration of a data chirp.

The delay of the UDP packets in the data chirp after traveling over the data link is determined relative to the delay of the first packet. The resulting delay pattern, where the relative delay per UDP packet is shown as a function of the packet number, is referred to as the data chirp fingerprint. A typical data chirp fingerprint for a fixed bandwidth 64 kbit/s bit pipe without cross traffic is shown in Figure 4.2.

Figure 4.2 Data chirp fingerprint for a fixed bandwidth bit pipe of 64 kbit/s.

From such a data chirp fingerprint a number of parameters can be determined, including the available bandwidth, random packet loss, congested packet loss and un-congested bandwidth [5].

In the chirp, packets are sent individually over the line but with a continuously decreasing sending interval, quantified by the factor γ (smaller than 1), resulting in an inter-sending time Tm for the m-th packet. The combination of T0, γ and chirp size N determines the lower and upper bound of the throughput of the UDP stream. At the start of the chirp, packets should be sent over the link with large enough time intervals in order to be able to characterize low bandwidth systems. Furthermore, the inter-sending time should decrease to a value so low that the offered rate is higher than the maximum expected available bandwidth. Finally, a small γ allows a fast characterization of the link, while a γ near 1 allows more accurate, but more time consuming, bandwidth estimations. After some initial experiments the chirp values were set to P = 1500 bytes, T0 = 200 ms, γ = 0.99 and N = 400, resulting in a lower and upper input rate into the system of 64 and 5000 kbit/s respectively. This chirp provides a well balanced compromise between measurement range, duration and accuracy. Figure 4.3 gives an overview of this approach.

Figure 4.3 Data chirp, sent to estimate the available bandwidth, using the idea of self-induced congestion with ever smaller inter-sending times.

This approach was first tested over the virtual tunnel interface running on the Linux machine. When this chirp was put over a clean link (i.e. over a tunnel with no cross traffic) the estimate of the available bandwidth was somewhat higher than the actual bandwidth set via the netem GUI. This is caused by buffers along the path which pass the chirp over the link at a higher speed; because of this the estimate is about 20% higher than the actual link speed. The second problem with this scheme was that when the chirp was sent over this link together with cross traffic, the fingerprint obtained from the chirp was not good enough for a correct estimate. The reason is that the chirp tries to push in through the TCP cross traffic, and by the time it succeeds there is a high packet loss, so no proper estimate can be made. Therefore the chirp is sent repeatedly (four times) with an inter-sending time of 5 seconds. The available bandwidth can then be estimated using:

Rm = P / Tm [bytes/sec],

where P is the packet size and Tm is the inter-sending interval at that point.

Un-congested Bandwidth

In principle, un-congested bandwidth can be estimated from the smearing of a single packet.
Even when there is cross traffic on a data link and we would like to estimate the bandwidth for the clean situation, we can use the smearing time: in a congested link a single packet is still smeared according to the un-congested bandwidth. However, obtaining the smearing time of a single packet is difficult with normal hardware equipment. Therefore packets are sent in pairs, as close as possible (back-to-back) after each other. This allows the smearing time to be assessed with normal hardware, because we can measure the receive timestamps of each packet and deduce the smearing from these. This method only works when the chance that cross traffic is sent over the data link between the two packets is minimal. Figure 4.4 illustrates the packet pair smearing measurement method.

Figure 4.4 The use of packet pairs for the determination of the un-congested bandwidth. Ts is the time when the first bit of a packet is put on the link, Tr is the time when the first bit of the packet starts to arrive, and Tr' is the time when the last bit of the packet is received.

As illustrated in Figure 4.4, the packets leaving the data link are smeared compared to the original packets; this smearing, indicated as ΔT, is determined from the time interval between the arrival of the first and the second packet in the pair. From this smearing the un-congested bandwidth UB can be estimated using:

UB = P2 / ΔT [bytes/sec],

where P2 is the size of the second packet in bytes.

To estimate the un-congested bandwidth we implemented the method described above, i.e., sending packet pairs over the link. However, buffering can cause measurement problems: when data is stored and forwarded, the link speeds preceding the buffer are no longer taken into account in the un-congested bandwidth estimate. This can for a major part be solved by using a lumped set of packets that varies between 1 and N concatenated packets; such lumps can no longer be stored in the small buffers of the link. In the current proposed measurement method we start with a single packet and then concatenate packets until a lump of seven packets is reached, from which point the lump of seven is repeated. The reason for not using more packets in a lump is the underlying Windows mechanism, which does not allow sending more than seven packets back-to-back; if the lump is made larger, random packet loss occurs from the first larger lump onwards. The lumps are sent using a chirp-like approach as described in section 4.2. The length of this series depends on the experimental context; an optimal choice is somewhere between 20 and 50 lumps for links with a speed between 100 and 10,000 kbit/s. In the final chirp P is set to 1500 bytes, the start interval to 200 ms and γ to 0.97. With N = 50 this choice is a compromise between the range of speeds that can be assessed, the measurement time and the measurement accuracy. In most cases these concatenated packets will be handled immediately after each other by all routers, and from the so-called packet smearing times a data link characterization is made that has a high correlation with the un-congested bandwidth of the link. This bandwidth estimate is always higher than the available bandwidth, since the available bandwidth is influenced by possible cross traffic on the data link. Test results obtained for the un-congested bandwidth are presented in chapter 7. Figure 4.5 provides an overview of the extended chirp.
Figure 4.5 Extended data chirp using the idea of measuring the smearing times of concatenated packets.

By measuring receive timestamps, the smearing of a packet can be measured when two or more concatenated packets are sent over the link. For this approach the un-congested bandwidth can be determined using:

UB = P / ΔT [bytes/sec],

where, in this lumped chirp version, P is the size of the lump and ΔT is the time difference between the first and the last packet in the lump.

Random Packet Loss

The random packet loss is determined from the packets before the bending point. In theory no packet loss should occur before this point, so by checking whether packets have been lost during the transmission of these first packets the random packet loss can be determined.

Congested Packet Loss

At a specific point all buffers are filled to their maximum and the delay per packet cannot increase any further because of this buffering. This is the point where packets are lost due to congestion on the link. This packet loss can be determined from the chirp fingerprint.

Experimental Setup

The setup was established at TNO-ICT Delft, The Netherlands. A Linux system with a network card that has two interfaces is used to emulate the network.

Figure 5.1 Experimental setup used for the simulations.

Software Setup

The Linux kernel contains a module called Netem which provides functionality for testing protocols by emulating the properties of wide area networks. The current Netem version emulates variable delay, loss, duplication and packet re-ordering. End users have no direct access to the Netem module; it is accessed using the traffic control (tc) command, with which users can direct Netem to change the settings of a hardware interface. A GUI for the tc command, termed the Netem PHPGUI, was developed and can be accessed via a web server.

Client Application

We have developed an application in Borland Delphi which runs on a Windows XP machine. This client application generates the chirp pattern. The standard TCP/IP stack present in Windows XP is used. Different parameters can be set through this application, such as the packet size and the interval between the chirps.

Server Application

An application was also developed which runs on a machine acting as a server, using the same Windows XP TCP/IP stack. The server application dumps the chirp information into files which are used afterwards for post-processing and for extracting the KPIs from the received information. A web server also runs on this machine to obtain the service characteristics for FTP and browsing.

Key Performance Indicator Measurement Implementation

As discussed above, a lumped-packet chirp and a single-packet data chirp are sent from the client to the server. The first (lumped) chirp pattern is used to estimate the un-congested bandwidth of the link; here, instead of sending a single packet, a lump is formed by concatenating several packets and sending it over the link. After a certain time gap (5 seconds) the next chirp pattern, consisting of single packets (unlike the first chirp), is sent; it is used to obtain the delay signature and the associated data link parameters such as the available bandwidth, random packet loss and congestion packet loss. The experiments are carried out on different links which are emulated using the network emulator mentioned in chapter 5. The sending and receiving timestamps of each chirp packet are logged and post-processing is done in order to extract the various parameters, which are described below.
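To make this post-processing step concrete, the sketch below shows one possible way of deriving the main quantities from the logged timestamps: the differential delays of the single-packet chirp, the bending point from which the available bandwidth is read, and the un-congested bandwidth from the smearing of a lump. It is a simplified Python illustration (the actual tools were written in Delphi); the function names, the simple bending-point rule and the loss handling are our own simplifications, and the precise rules used in the project are described in the subsections below.

```python
# Simplified post-processing of logged chirp timestamps (illustration only).
# send[i] / recv[i]: send and receive timestamps (seconds) of the i-th
# received packet of the single-packet chirp, already matched by sequence
# number, i.e. lost packets have been removed from both lists.

def differential_delays(send, recv):
    """One-way delays relative to the first packet (removes the clock offset
    between the unsynchronized client and server)."""
    base = recv[0] - send[0]
    return [(r - s) - base for s, r in zip(send, recv)]

def bending_point(delays, threshold=0.050):
    """Index of the first packet after which the differential delay stays
    above `threshold` seconds. This is a crude stand-in for the bending
    point; the report locates it on a smoothed signature (25-packet moving
    average) when cross traffic makes the raw signature noisy."""
    for i in range(len(delays)):
        if all(d > threshold for d in delays[i:]):
            return i
    return None

def available_bandwidth(intervals, bend, packet_size=1500):
    """Offered rate at the bending point, taken as the available bandwidth
    (bit/s). `intervals` are the inter-sending times of the chirp."""
    return packet_size * 8 / intervals[bend]

def uncongested_bandwidth(lump_recv, payload_bytes):
    """UB (bit/s) from the smearing of one lump of back-to-back packets.
    `lump_recv` holds the receive timestamps of an intact lump and
    `payload_bytes` the bytes carried by the packets after the first one
    (the first packet only marks the start of the smear interval)."""
    smear = lump_recv[-1] - lump_recv[0]
    return payload_bytes * 8 / smear
```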
Estimation of Un-congested Bandwidth from Packet Smear Times

When a packet is sent over a data link it is smeared in time. The smear time is the difference between the time at which the complete packet has been received and the epoch at which the packet starts arriving. Figure 6.1 depicts the smear times of the packets received at the server side.

Figure 6.1 Packet smear times.

The smear times of the packets are logged to a file. Due to software limitations it is very hard to distinguish between the time a packet starts arriving and the time it has completely arrived. Because of this, the smearing time is computed from the receive timestamps of consecutive packets. Packets may be dropped, which would lead to a wrong estimate of the smear time; therefore, if a packet drop is observed, the smearing time is not computed between two packets when a packet that was transmitted between them is missing at the reception. Multiple packets may also be dropped. If there is packet loss inside a lump, the smearing time is estimated from the longest run of packets within the lump in which not a single packet is missing. This logic is depicted in the flowchart shown in Figure 6.2.

Figure 6.2 The un-congested bandwidth calculation from smear times.

Estimation of Available Bandwidth from the Bending Point of the Chirp Signature

The difference between the time at which a packet is received and the time at which it was sent is termed the delay. The packet sending timestamp is placed in the packet on the client side and the receive timestamp is recorded at the server side. As client and server are not time synchronized, a simple synchronization is applied by subtracting the delay of the first packet from all delays. Such a delay is referred to as the differential delay.

Figure 6.3 Chirp fingerprint.

The differential delay is used to generate the chirp signature by plotting the estimated differential delays against the received packets. As one can see from Figure 6.3, the differential delay suddenly increases (in this particular case around the 90th packet of the chirp train). This is the point where the link is completely occupied by the chirp packets and any cross traffic packets that are present. At this point the rate at which the chirp packets are sent represents the available bandwidth of the data link under consideration.

Chirp Behavior versus Service Characteristics

The measurement setup used an Internet simulator to manipulate the packet loss, buffer size, bandwidth and delay of a single PC-to-PC network link. We collected the delay fingerprint data of the chirp and the session times that quantify the service quality. The following parameters were used for these experiments:
Packet loss between 0 and 10% in the downlink.
Buffer size in the downlink between 1 and 30 packets.
Bandwidth in the downlink between 64 kbit/s and 1 Mbit/s.
Delay in the up- and downlink between 1 and 300 ms.
From the large set of possible combinations a subset of conditions was chosen.
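The report does not list the exact subset of conditions that was used. Purely as an illustration of the size of this parameter space, a grid over these ranges could be enumerated and sub-sampled along the following lines (only the ranges come from the list above; the individual grid points and the sampling step are hypothetical):

```python
# Hypothetical condition grid; only the parameter ranges come from the
# report, the individual grid points and the sub-sampling are illustrative.
from itertools import product

loss_pct    = [0, 1, 2, 5, 10]           # downlink packet loss (%)
buffer_pkts = [1, 5, 10, 30]             # downlink buffer size (packets)
rate_kbit   = [64, 128, 256, 512, 1000]  # downlink bandwidth (kbit/s)
delay_ms    = [1, 10, 50, 100, 300]      # up/downlink one-way delay (ms)

grid = list(product(loss_pct, buffer_pkts, rate_kbit, delay_ms))
subset = grid[::10]                      # e.g. keep every tenth combination
print("%d combinations, %d conditions selected" % (len(grid), len(subset)))
```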
In each condition six measurements were carried out:
1) An HTTP browsing session time measurement using three files in the following time line: empty cache, start browse, 2 kByte download, 10 kByte download, 70 kByte download, end browse (browse small, medium and large respectively).
2) An HTTP download of a 4 kByte file (download small).
3) An HTTP download of a 128 kByte file (download medium).
4) An HTTP download of a 4000 kByte file (download large).
5) A ping round trip time measurement.
6) A data chirp measurement using the pattern described in chapter 4.

A standard Windows XP TCP/IP stack was used, and for some conditions the system showed bifurcation behavior. This can be expected, since an acknowledgement can be received just in time or just too late depending on infinitesimally small changes in the system. In all cases where this behavior was found, the minimum download/session time was used in the analysis. Experiments were performed under different data link scenarios by changing buffer sizes, packet loss and delay, with and without competing cross traffic. Several chirp settings were tried in order to find the optimum settings. For each data link under test the service characteristic parameters mentioned above (session times) were measured, and afterwards, on the data link with the same conditions, a data chirp was sent and the data link key performance indicators were computed from the chirp signature. The experimental observations are discussed in Appendix A.

The correlation between the link capacity and the un-congested bandwidth estimates was excellent: 0.99 (see Figure 7.1).

Figure 7.1 Un-congested bandwidth estimation.

The correlations between the service characteristics (browsing/download times) and the KPIs estimated from the chirp delay pattern were lower. The results show that the small browsing session times are dominated by the ping time and the un-congested bandwidth. Figures 7.2 and 7.3 show the relationship between the measured small browsing session times, the small FTP download times and the best two-dimensional predictor that could be constructed from the ping time and the un-congested bandwidth. This predictor is the best KPI combination that could be constructed and shows a correlation of 0.92 for the small browsing data and 0.98 for the small download data.

Figure 7.2 Small browsing session.
Figure 7.3 Small FTP download.

For medium and large browsing/downloading it was not possible to fit any combination of KPIs (up to three dimensions) with a satisfactory correlation (above 0.9). The available bandwidth gives the highest correlation for these measurements, around 0.7. In the case of clean data links, the correlation between the available bandwidth and the link capacity was 0.93. In the case of one TCP cross traffic stream, the available bandwidth estimate did not show an acceptable correlation with the data link capacity.

Conclusion

In this project a black box measurement approach for assessing the perceived quality of data links was implemented. This quality is defined as the measured browsing and download session times. The measurement method uses the concept of a data chirp: in general, a data chirp puts data on a link with ever increasing sending speed, and the delay behavior of the packets is then used to characterize the link.
The chirp is implemented in two different ways: the first uses a set of lumped packets from which the un-congested bandwidth is estimated, the second uses a set of single packets from which the available bandwidth is estimated. Together with the ping time this allows a full characterization of the data link. From the data link characterization a prediction model for the session times is constructed. The model shows a correlation of 0.98 for the small download data set and of 0.92 for the browsing data set over a number of small pages. The model uses a two-dimensional regression fit derived from the ping time and the un-congested bandwidth. For medium and large browsing/downloading it was not possible to fit any combination of KPIs (up to three dimensions) with a satisfactory correlation (above 0.9). The available bandwidth gives the highest correlation for these measurements, around 0.7. Besides the session times the model also allows estimating the link capacity: the correlation between the real link capacity, as configured in the network simulator, and the chirp-estimated un-congested bandwidth was 0.99.

List of Acronyms

ADSL Asymmetric Digital Subscriber Line
FTP File Transfer Protocol
GSM Global System for Mobile communication
GPRS General Packet Radio Service
ITU International Telecommunications Union
LAN Local Area Network
PSTN Public Switched Telephone Network
TCP Transmission Control Protocol
TNO Nederlandse Organisatie voor Toegepast Natuurwetenschappelijk Onderzoek (Netherlands Organisation for Applied Scientific Research)
WIFI Wireless Fidelity
UDP User Datagram Protocol
UMTS Universal Mobile Telecommunications System

References

[1] ITU-T Recommendation E.800 (08/94), Quality of service and dependability vocabulary.
[2] M. Jain, C. Dovrolis, "Pathload: a Measurement Tool for Available Bandwidth Estimation", Proc. PAM 2002.
[3] V. J. Ribeiro, R. H. Riedi, R. G. Baraniuk, J. Navratil, L. Cottrell, "pathChirp: Efficient Available Bandwidth Estimation for Network Paths", Paper 3824, Passive and Active Measurement Workshop, April 2003, La Jolla, California, USA.
[4] R. Prasad, M. Murray, C. Dovrolis, K. Claffy, "Bandwidth Estimation: Metrics, Measurement Techniques, and Tools", IEEE Network, November-December 2003.
[5] TNO Report: Gap Analysis Circuit Switched Versus Packet Switched Voice and Video Telephony.
[6] ITU-T Rec. G.1030, Estimating end-to-end performance in IP networks for data applications, International Telecommunication Union, Geneva, Switzerland, November 2005.
[7] Ahmed Ait Ali, Fabien Michaut, Francis Lepage, "End-to-End Available Bandwidth Measurement Tools: A Comparative Evaluation of Performances", CRAN (Centre de Recherche en Automatique de Nancy), UMR-CNRS 7039.

Appendix A Measurement Results

Lumped Chirp Tests

Lumped chirp test with cross traffic and 5% loss. In the first test a TCP cross traffic stream was generated by sending 50 files of 1 MB each in a loop, in such a way that slow start does not become active again. A loss of 5% was set for this situation. After analyzing the behavior of TCP with Ethereal, the extended chirp was sent over the link to estimate the un-congested bandwidth. From the graph we can see that at the start the estimates are quite high; this behavior depends entirely on the state of the TCP mechanism, i.e. whether the stream is in its stable state or still in slow start.
The following parameters were set for this experiment:
Number of lumps: 50
Maximum lump size: 7 packets
Packet size: 1500 bytes
Interval: 100 ms
Alpha: 0.97
Link speed: 1000 kbit/s
Loss: 5%
Cross traffic: TCP cross traffic, 50 files of 1 MB.

Figure 1 Results achieved by sending the extended chirp over a link of speed 1000 kbit/s (estimated bandwidth versus packet lumps).

The average estimated bandwidth over the time interval is 1030 kbit/s. Due to the loss set for this test the number of observations is reduced to 34 (45 for an ideal link). The overestimation is caused by the higher bandwidths calculated for the small lump sizes, which in turn is due to buffers in the link that mask the smearing of these packets. We also tested the same scheme in other scenarios; the average estimated bandwidth in the case of no cross traffic, or with loss on the link, is somewhat higher than the actual fixed bandwidth. With loss we simply get fewer estimates, since the loss of a packet causes a reading to be dropped.

Lumped chirp tests on real links.

Test 1: TNO internal network. For this test the extended chirp was sent over the internal network at TNO. In this case only the parameters related to the chirp can be tweaked; we have no knowledge of the policies implemented in the TNO network. The following settings were used for this test:
Number of lumps: 50
Maximum lump size: 7 packets
Packet size: 1500 bytes
Interval: 100 ms
Alpha: 0.97
Link speed: unknown
Loss: unknown
Cross traffic: unknown.

Figure 2 Results achieved by sending the extended chirp over the TNO network (estimated bandwidth versus packet lumps).

From Figure 2 we can see that there is packet loss on the network, as only 30 lump observations remain (45 under an ideal clean link), and that the estimate drops due to self-induced congestion at the higher rates. The average estimated un-congested bandwidth over time is 2.10 Mbit/s. From the figure one can also conclude that policies are imposed along the path: if a stream of data tries to take a larger share of the bandwidth, the routers may not allow it to do so.

Test 2: Delft-Eindhoven over the Internet. For this test the extended chirp was sent over the Internet between Delft and Eindhoven. Again, only the parameters related to the chirp can be tweaked; we have no knowledge of the policies implemented along the path. The following parameters were set for this test:
Number of lumps: 50
Maximum lump size: 7 packets
Packet size: 1500 bytes
Interval: 100 ms
Alpha: 0.97
Link speed: unknown
Loss: unknown
Cross traffic: unknown.

Figure 3 Results achieved by sending the extended chirp over a link between Delft and Eindhoven (estimated bandwidth versus packet lumps).

From Figure 3 we can see that there is packet loss on the path, as only 32 lump observations remain (45 under an ideal clean link), and the estimate drops due to self-induced congestion at the higher rates. The average estimated un-congested bandwidth over the time interval is 2.30 Mbit/s. This experiment shows the same behavior as the TNO network: after some time the routers do not allow the stream of data to take up a larger part of the bandwidth but restrict it to a limit.

Data Chirp Tests

Data chirp test with cross traffic and 5% loss.
In this test TCP cross traffic was generated over the virtual tunnel link; we wait a while so that TCP comes out of its slow start, and then send the repeated chirp to estimate the available bandwidth of the link. The following settings were used for the experiment:
Number of packets: 400
Packet size: 1500 bytes
Interval: 200 ms
Alpha: 0.99
Link speed: 1000 kbit/s
Loss: 5%
Cross traffic: TCP cross traffic, 50 files of 5 MB.

Figure 4 Data chirp over a data link of 1000 kbit/s with TCP cross traffic (differential delays versus number of packets).

The estimated available bandwidth obtained from the chirp was 839 kbit/s. This can be explained by the fact that whenever there is packet loss, TCP re-adjusts itself and releases part of the bandwidth it was using, which is not the case for UDP: whatever bandwidth is available, UDP tries to use it.

Data chirp tests on real links.

Test 1: TNO internal network. In this experiment the repeated data chirp was sent over the internal network at TNO, so we can only adjust the parameters related to the chirp; we have no knowledge of the policies implemented in the TNO network. The following settings were used for the test:
Number of packets: 400
Packet size: 1500 bytes
Interval: 200 ms
Alpha: 0.99
Link speed: unknown
Loss: unknown
Cross traffic: unknown.

Figure 5 Data chirp over the TNO data link (differential delays versus number of packets).

From Figure 5 we can see that there is packet loss on the network, as the number of received packets is less than 400 (400 under an ideal clean link without loss), and the fingerprint does not look smooth. There are quite a few excursions in the fingerprint, which can be due to the buffers in the links along the path. The available bandwidth estimated from this fingerprint is 491.92 kbit/s.

Test 2: Delft-Eindhoven over the Internet. In this experiment the repeated data chirp was sent over the Internet between Delft and Eindhoven. We can only tweak the parameters related to the chirp; we do not know what policies apply on the Internet path.
Number of packets: 400
Packet size: 1500 bytes
Interval: 200 ms
Alpha: 0.99
Link speed: unknown
Loss: unknown
Cross traffic: unknown.

Figure 6 Data chirp over the Internet between Delft and Eindhoven (differential delays versus number of packets).

From Figure 6 we can see that there is packet loss over the Internet, as the number of received packets is less than 400 (400 under an ideal clean link without loss), which could be due to congestion or other factors along the path. The available bandwidth obtained from this fingerprint is 706.60 kbit/s. Excursions are present which could be due to buffers in the link.

Performance parameters like random packet loss and congestion packet loss are calculated from the observations received at the receiver side: the random packet loss is calculated before the bending point and the congestion packet loss after the bending point. In the following section the analysis of different scenarios is discussed. First we examined a clean link with the parameters: interval 200 ms, loss 5% and delay 10 ms. A delay of 10 ms represents a small simulated network. We ran the tests with varying bandwidth, loss and delay.

Figure 7 Data chirp test over different links.

Figure 7 shows the outcome of the data chirp sent over a clean simulated link with the bandwidth set to 64 kbit/s, 256 kbit/s, 1000 kbit/s and 5000 kbit/s respectively. In the above experiment we had no knowledge of the underlying buffers.
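As noted above, the random and congestion packet loss are split at the bending point. A minimal sketch of that split is given below; it assumes that each chirp packet carries its sequence number (which is how received packets can be matched against the packets sent), and the exact percentages reported by the actual tool may be computed slightly differently.

```python
# Illustrative split of chirp packet loss into random loss (before the
# bending point) and congestion loss (after the bending point).

def loss_split(received_seq, n_sent, bend_seq):
    """received_seq: sequence numbers of the chirp packets that arrived;
    n_sent: number of chirp packets sent; bend_seq: sequence number at
    which the bending point was located in the delay signature."""
    received = set(received_seq)
    lost = [m for m in range(n_sent) if m not in received]
    random_lost = sum(1 for m in lost if m < bend_seq)
    congestion_lost = sum(1 for m in lost if m >= bend_seq)
    # Express each as a percentage of the packets sent in that region.
    random_pct = 100.0 * random_lost / max(bend_seq, 1)
    congestion_pct = 100.0 * congestion_lost / max(n_sent - bend_seq, 1)
    return random_pct, congestion_pct
```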
After post-processing we were able to extract all the KPIs from these signatures. The only noticeable issue, visible in Figure 7(a), is that the bending point appears very early, almost at the start of the signature, which is not good enough: one cannot judge the link from the behavior of just one or two packets. This behavior is seen only on the 64 kbit/s link. An alternative would be to increase the interval time, but that does not give a correct estimate of the available bandwidth. We use the same scheme as mentioned earlier to estimate the un-congested bandwidth, i.e. over lumps of seven packets. In the following case we put a TCP cross stream, i.e. a large file download, over the links to test the developed method.

Figure 8 Data chirp test over links with one TCP cross traffic stream.

Figure 8 shows the outcome of the data chirp sent over simulated links with the bandwidth set to 64 kbit/s, 256 kbit/s, 1000 kbit/s and 5000 kbit/s respectively and one TCP cross traffic stream. One can easily observe an oscillation-like behavior in the figure. This is due to the presence of the cross traffic and the underlying buffers in the link. Under these conditions it is not easy to extract the KPIs. Even for the un-congested bandwidth the method did not work, because due to the cross traffic and packet loss we were not able to receive whole lumps intact. We therefore modified the approach for obtaining both the un-congested and the available bandwidth. For the un-congested bandwidth the method was modified such that, instead of calculating the bandwidth over the whole lump, we first find the largest intact part of the lump that was received and calculate the bandwidth from it, and then from the rest of the readings; the parameters for the un-congested bandwidth estimation were not changed. This gives a good estimate of the un-congested bandwidth. To obtain an estimate of the available bandwidth we tweaked some of the data chirp parameters: the inter-sending time between the lumped chirp and the data chirp was set to five seconds, the interval to 500 ms and alpha to 0.98.

Figure 9 Data chirp with interval 500 ms and alpha 0.98 over clean links.

Figure 10 Data chirp with interval 500 ms and alpha 0.98 with one TCP cross traffic stream.

Figures 9 and 10 show the fingerprint of the chirp after tweaking the parameters. It is clear from the figures that the fingerprint is quite clean, with no oscillations present. The problem caused by this adjustment, however, is that the estimate obtained after post-processing is not accurate: it is considerably higher than the actual bandwidth of the link. Although we were able to get a correct estimate of the un-congested bandwidth, this approach was therefore not adopted. The second approach we took was smoothing of the fingerprints. A window size of 25 is used, where the differential delays of the next 25 packets are averaged so that the chirp signature is smoothed. This approach was quite helpful for estimating the available bandwidth, as the oscillation effects were removed well and we were able to obtain the estimate correctly. This smoothing was applied to the observations received with the following parameters: interval 200 ms, alpha 0.99 and an inter-sending time between data chirps of five seconds.

Figure 11 Data chirp with interval 200 ms and alpha 0.99 with one TCP cross traffic stream, after smoothing.

Figure 11 shows the smoothed versions of the fingerprints shown in Figure 8. With the use of this smoothing technique the estimation can easily be done even for links with quite small buffers; for all these observations the buffer size was set to thirty. Small buffers have a greater impact on the fingerprint of the chirp, so the window size has to be adjusted accordingly.

Appendix B End-to-end Data Link Quality Measurement Tool

Revision History

Version 1.0, 20.2.2007: Initial version.
Version 2.0, 25.3.2007: Results: delimitation added, results updated. Phasing plan: realization phase added. Control plan: time/capacity added, information updated, quality updated, organisation added.
Version 3.0, 05.06.2007: Information updated, organisation updated, test plan added.

Introduction

The developments in information technology of the last years have led to major advances in high-speed networking, multimedia capabilities for workstations and distributed multimedia applications. In particular, multimedia applications for computer supported cooperative work have been developed that allow groups of people to exchange information and to collaborate and cooperate on joint work. However, existing communication systems do not provide the end-to-end guarantees for multipoint communication services that are needed by these applications. In this thesis a communication architecture is described that offers end-to-end performance guarantees in conjunction with flexible multipoint communication services. The architecture is implemented in the Multipoint Communication Framework (MCF), which extends the basic communication services of existing operating systems. It orchestrates end-system and network resources in order to provide end-to-end performance guarantees. Furthermore, it provides multipoint communication services where participants dynamically join and leave. The communication services are implemented by protocol stacks which form a three-layer hierarchy. The topmost layer, called the multimedia support layer, accesses the end system's multimedia devices. The transport layer implements end-to-end protocol functions that are used to forward multimedia data. The lowest layer, labelled the multicast adaptation layer, interfaces to various networks and provides a multipoint-to-multipoint communication service that is used by the transport layer. Each layer contains a set of modules that each implement a single protocol function. Protocol stacks are dynamically composed out of modules; each protocol uses a single module on each layer. Applications specify their service requirements as Quality of Service (QoS) parameters.

The shift from PSTN/GSM/GPRS to ADSL/Cable/WiFi/UMTS technology and the corresponding shift from telephony to multimedia services will have a big impact on how the end-to-end quality as perceived by the customer can be measured, monitored and optimized. For each service (speech / audio / text / picture / video / browsing / file download) a different perceived quality model is needed in order to be able to predict customer satisfaction. This project proposal focuses on an integrated approach towards the measurement of the perceived quality of interactive browsing and file downloading over a data link.

Results

Problem definition: To place the overall end-to-end QoS problem into perspective, it is clear that the emergence and rapid acceptance of Internet and Intranet technologies is providing commercial and military systems with the opportunity to conduct business at reduced costs and greatly increased scales.
To take advantage of this opportunity, organizations are becoming increasingly dependent on large-scale distributed systems that operate in unbounded network environments. As the value of these transactions grows, companies are beginning to seek guarantees of dependability, performance and efficiency from their distributed application and network service providers. To provide adequate levels of service to customers, companies will eventually need levels of assured operation. These capabilities include policy-based prioritization of applications and users competing for system resources; guarantees of levels of provided performance, security, availability, data integrity and disaster recovery; and adaptivity to changing load and network conditions. The problem faced by networks today is that they do not deliver the services they were made for; the problem is therefore to set the network parameters in such a way that they give the maximum output and the resources are used well.

Project goal: The ultimate goal of the project is to define a measurement method with which instantaneous insight can be created into the performance of all services that are carried via the link under consideration. It is clear that the operator delivering the best portfolio with the best quality for the lowest price will survive in the market. This means that an operator has to know how to set the network parameters in order to deliver the best end-to-end quality, based on the network indicators, the service characteristics and the characteristics of the end user equipment. This optimization will increase the number of new subscribers, leading to an increase in revenues. A fast, efficient method for combined data/streaming quality measurement is thus of vital importance.

Results: At the end of the project the deliverables will be:
A tool that is able to test the performance as well as the status of the network.
A report that accurately describes the processes and explains the choices made.
An analysis of the results and a list of recommendations to achieve the best possible results.
A presentation of the results to the TNO-ICT and ECR group at TU/e.

Delimitation

The project focuses on the estimation of the available as well as the un-congested bandwidth, so testing can only be done on the networks currently available in the company. Since this is a research activity, the approach will be based on trial and error.

Phasing Plan

Initial phase: Nov 2006 - Jan 2007. During this period the focus is on current work going on in the market and on the kinds of tools that are available and can be used to develop this new method for measuring link performance. All background and necessary knowledge will be gathered.

Design phase: Feb 2007 - April 2007. The following activities are planned after the initial phase: 1) Investigation of hardware timing accuracy in order to improve measurement accuracy (find the best hardware available). 2) Creation of a test setup that allows investigating the effects of cross traffic. 3) Creation of a bearer-switching-like link simulator to investigate the effects of traffic channel capacity and channel capacity switching. 4) Effect of packet loss. 5) Buffer impact measurements. The measurements will be based on simulations; results will be plotted and analysed.

Preparation phase: May 2007. These measurements are focused on the relation between: 1) Browsing/download times and the chirp delay pattern.
2) Audio/video streaming throughput and the chirp delay pattern. 3) Re-buffering, pause/play and fast forward behaviour of audio/video streaming in relation to the chirp behaviour. All above-mentioned observations will be made on simulated as well as real networks using the developed tool.

Test phase: June 2007 - August 2007. The activities involved in the realization phase will be: 1) Decide upon the nature of the approach that is going to be used. 2) Generate results (cross traffic maps). 3) Collect results and create a solid set for the evaluation. 4) Write down in the report the pros and cons of the method. 5) New set of experiments and re-evaluation. 6) Discuss results with the supervisors at TNO-ICT. 7) Update the report with new findings.

Control Plan

Time/capacity norm data:
Starting date: 01.11.2006
Completing date: 31.08.2007
Final report: not yet known
Final presentation: not yet known
Duration of the phases:
Initiation phase: 6 (+/- 1) weeks
Design phase: 14 (+/- 2) weeks
Preparation phase: 2 (+/- 1) weeks
Realization phase: 14 (+/- 2) weeks

Quality: The quality of the measurement approach will be discussed with the supervisors at TNO-ICT.

Information (phase, output, status):
Literature study: study of background research papers (done); project proposal document (done); study documentation for NS-2, TCL and Delphi (done); develop small applications in Delphi to get familiar with the tool and make TCL scripts for NS-2 to check the behaviour of TCP (done).
Experiment setup: install Fedora 6 on a Dell machine (done); recompile the Linux kernel, tweaking some kernel parameters (done); install the PHP GUI for the network emulator (done); develop the chirp method and implement it in a UDP server and client using the Indy socket library in Delphi (done).
Simulation and measurement: experiment using virtual tunnel network interfaces and run simulations to observe the behaviour of the developed method (done).
Result analysis: analyse the results achieved from the experiments over the tunnel; analyse the results over real networks (done).
Documentation and presentation: first draft version in the last week of July; final version to be handed in the second week of August.

Organization:
Progress control: Frequent meetings will be arranged with the project manager to show the progress of the project.
Risk analysis: Slow implementation (lack of knowledge or unexpected errors); get help from co-workers in the company. Requirement change requests from the project advisor can lead to a delay in designing the measurement approach.

Monday, December 23, 2019

The Differences Between Women And Women - 1406 Words

As I noted in Part 1 of this series, strands of thought that arise out of political movements are often difficult to categorize and also often answer to many names. The difference approach discussed here, following Haslanger and Hackett,1 may elsewhere be called radical, cultural, or gynocentric feminism. Recall that the basic nugget of thought underlying the sameness approach was the thought that men and women,2 in whatever way matters, are similar enough to warrant similar treatment. Insofar as they are denied similar treatment, they are wronged, and a system that denies them this treatment is wrong or unjust along the dimension of gender. I noted a problem with this approach in the first essay, which was that similar treatment is not always the best answer to the kinds of wrongs women face and which feminism seeks to alleviate. The difference approach may be seen as an attempt to offer a feminist alternative that avoids this pitfall. Whereas the sameness approach responds to the sexist—who claims that men are better than women in some relevant way—by asserting the sameness of men and women, the difference approach responds by turning the sexist’s argument on its head—at least sometimes, perhaps in many domains, women are better than men. Of course, what is claimed is really closer to this: traditionally feminine attributes and qualities have been undervalued or devalued by a male-dominated society and should be revalued to reflect their true worth.

Sunday, December 15, 2019

One Source Essay

The purpose of this paper is to argue for and against an organization adopting an ethical approach. This essay will look into the two sides of the argument in depth using relevant theories, examples and case studies. The first part of this essay will look into why an organization adopting an ethical approach to management could ultimately benefit the firm. On the other hand, the essay will look at the case against a firm adopting an ethical approach to management. The essay will then conclude by suggesting that it is important for organizations to act ethically to a certain extent.

One definition suggests that 'ethics are the moral principles that should underpin decision-making. A decision made on ethics might reject the most profitable solution in favor of one of greater benefit to society as well as the firm' (Marabous, 2003). The key words used in this definition are 'moral principles', so the definition suggests that acting ethically means acting in a moral way. In essence, an ethical approach to management is generally acting rightly to benefit the community and the environment, not solely concentrating on maximizing profits. It is also important to define what exactly acting morally is; one good definition suggested that morality is the notion of what is good and bad (McIntyre, 1998).

Argument For

In arguing for an organization adopting an ethical approach, one benefit that an organization would gain from this behavior is that it could be used as a USP (unique selling point). This is evident in a variety of organizations today, for instance the Body Shop. The Body Shop sells products that are kind to the environment, and also boasts the fact that it is 100% against animal testing. A key point is that not only does the company strive to improve communities in less developed countries, but it publicizes these actions in order to get support from possible consumers. This strategy appeals to customers a great deal, which implies there are plenty of consumers who choose not to buy products that have been tested on animals and so choose to buy products only from the Body Shop. Similarly, there are consumers who may not have such strong opinions against animal testing but buy products from the Body Shop because it seems like the right thing to do. These ethical approaches to management have seen the Body Shop's profits rise over the years, and it is now one of the largest cosmetic retailers in the country as a result. As well as advertising the fact that it is against animal testing, the Body Shop also promotes community trade, activating self-esteem, defending human rights, and protecting our planet.

Organizations will also gain significant public relations advantages from ethical behavior. There are examples of organizations that have not acted ethically and as a result have received very negative publicity. One key example would be Nestle. A study in the British Medical Journal said that manufacturers of powdered milk, such as Nestle, were breaking international codes by selling their products to West African countries. The studies were carried out in two West African countries, Togo and Burkina Faso. Findings from the study showed that Nestle had been issuing free powdered milk to mothers in these West African countries, and officials from Nestle had convinced mothers that powdered milk was actually better for their children than breast milk. Once the free supplies ran out, mothers needed to find money in order to purchase this milk.
Of course money was not always available, so drastic measures were taken, such as over-diluting the little powdered milk they had available, or diluting the powdered milk with water that was not very clean. As a result, children's health in the region was poor due to lack of nutrition and consumption of contaminated water. The result of this study severely affected the reputation of Nestle. Pressure groups and other activists urged consumers to boycott products from the firm because of the way it had acted in Africa. As a result of this poor publicity, Nestle saw operating profits fall significantly. This case study is a prime example of how not acting ethically could seriously damage the reputation of the firm, so another advantage of adopting an ethical approach to management is that this sort of situation could be avoided.

Another major advantage of an ethical approach to management is that an organization could get more out of its workforce. Employees can be expected to respond positively to working for an organization that they trust to be acting morally. Employees may feel proud to work for a firm that they know is abiding by ethical and moral guidelines. This would also help motivate the workforce and boost their confidence, which could in turn lead to higher productivity from the workforce and ultimately to higher operating profits. A positive ethical approach to management could also add to the competition for employment at such a firm. An ethical approach to management would additionally result in lower labor turnover, because fewer employees would leave the organization if they felt they were being treated right; all of these reasons would lead to lower costs for an organization, i.e. training and paying redundancies. A survey conducted in 2003 even showed that about 75% of the Body Shop's employees felt 'proud' to be working for the organization. According to Bauman (1996), the success of the final solution depended on the capacity of managerial techniques to denude individuals of their dignity and deprive them of their humanity.

Argument Against

One of the main disadvantages that comes with an ethical approach to management is the cost involved in managing ethically. A key example would be the exploitation of cheap labor. Sports manufacturing giant Nike has been accused of exploiting cheap labor in Asian markets. A report in Vietnam in 1997 showed that Nike had been mistreating women who worked in the factories producing shoes. The women were being paid about $1.00 per day, which was well below the minimum wage in America. It was reported that the workforce was even punished for using verbal communication and was only allowed one toilet break during their period of work. From an ethical point of view this is the opposite of how a firm should act, and thus Nike received bad publicity for its actions. From Nike's point of view, however, exploiting cheap labor in these Asian markets meant extremely high profits per unit produced, because the shoes were being sold at around $150. Since the bad publicity and attempted boycotts by pressure groups, Nike has vowed to act in a more ethical manner, paying workers significantly higher wages and improving working conditions; although this did reduce the amount of bad publicity the company was receiving, it also meant that Nike saw its costs soar.
Although the company still makes a healthy profit, a more ethical approach to management has meant its profits are lower than before. There is, however, the argument that not all organizations will see a loss in profit for acting more ethically. It will largely depend on what type of organization is in question; for example, Marks and Spencer sell organic chocolate and promote the fact that there is fair trade with farmers. Although Marks and Spencer do have to pay farmers fairly, they can also charge a premium on their products to maintain profit levels. This way the firm can kill two birds with one stone, because it gains positive publicity and a good consumer base while maintaining profit levels.

Another disadvantage of a more ethical approach to management is that it could conflict with existing policies within the organization. A possible restructuring of the organization may need to be done, and internal divisions may be created within the business. This of course is a problem if the workforce is not used to change or does not want change in general. It could lead to a lack of motivation among workers, which in turn would lead to lower levels of productivity. A company could also experience problems in sending a consistent message in an organization which is decentralized. Even though the workforce may be in favor of a more ethical approach to management, it would be extremely difficult to implement, and additional training of the workforce may be required for maximum efficiency.

Conclusion

Having argued on both sides, it suffices to state that it is important for firms or organizations to adopt an ethical approach to management, as the advantages clearly outweigh the disadvantages.

Saturday, December 7, 2019

The association between tax and liability - MyAssignmenthelp.com

Question:

1. From your firm's financial statements, list each item of equity and write your understanding of each item. Discuss any changes in each item of equity for your firm over the past year, articulating the reasons for the change.
2. What is your firm's tax expense in its latest financial statements?
3. Is this figure the same as the company tax rate times your firm's accounting income? Explain why this is, or is not, the case for your firm.
4. Comment on the deferred tax assets/liabilities that are reported in the balance sheet, articulating the possible reasons why they have been recorded.
5. Are there any current tax assets or income tax payable recorded by your company? Why is the income tax payable not the same as the income tax expense?
6. Is the income tax expense shown in the income statement the same as the income tax paid shown in the cash flow statement? If not, why is there a difference?
7. What do you find interesting, confusing, surprising or difficult to understand about the treatment of tax in your firm's financial statements? What new insights, if any, have you gained about how companies account for income tax as a result of examining your firm's tax expense in its accounts?

Answer:

Answer 1

The different financial statements together provide an overall picture of the financial health of a company. Three major components of the financial statements are Assets, Liabilities and Owners' Equity (Brigham & Ehrhardt, 2013). This part analyzes the items of equity of CSR Limited, Australia. According to the 2017 Annual Report of CSR Limited, the four items of equity of the company are Issued Capital, Reserves, Retained Profits and Non-Controlling Interests (csr.com.au, 2017).

Issued capital refers to the share capital issued to shareholders in order to raise capital for business operations. The latest annual report of CSR Limited shows a slight fall in issued capital in 2017 from 2016: $1,036.8 million in 2017 against $1,041.1 million in 2016 (csr.com.au, 2017). The total numbers of shares issued in 2017 and 2016 were 504,480,858 and 505,700,315 respectively; the smaller number of issued shares is the reason for the decrease in issued capital in 2017 over 2016 (csr.com.au, 2017).

The next item is Reserves. Reserves refer to the excess amount paid by shareholders beyond the par value of shares. As per the annual report of CSR Limited, the reserve balance was positive in 2016 at $20.4 million, but in 2017 it declined sharply and became negative at $(73.4) million (csr.com.au, 2017). The main reserve categories in CSR Limited are the hedge reserve, foreign currency translation reserve, employee share reserve, share-based payment trust reserve, non-controlling reserve and other reserves. In 2017, the major reasons for the decline in reserves were the recognition of hedge losses in equity, the transfer of hedge profits, the recycling of foreign currency translation differences, the disposal of equity investments, the acquisition of treasury shares and non-controlling interests on a subsidiary acquisition (Brigham & Houston, 2012).

The next item in equity is Retained Profits, the accumulated profit or loss of the organization since its foundation. The annual report of CSR Limited shows a large increase in retained profits in 2017 from 2016: $191.6 million and $123.2 million in 2017 and 2016 respectively (csr.com.au, 2017). The main reason for the increase in CSR Limited's retained profits is the increase in profit after tax.
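As a quick illustration of the retained profits movement described above, the short sketch below rolls the 2016 balance forward using only the figures quoted in this answer. The dividend paid during the year is not quoted here, so the sketch can only back out the implied net movement (profit after tax less dividends and any direct adjustments); treat it as an illustrative check rather than CSR Limited's own note.

```python
# Illustrative roll-forward of CSR Limited's retained profits (A$ million),
# using only the balances quoted in Answer 1. The dividend figure is not
# quoted in this discussion, so only the net movement can be derived.
opening_retained_profits = 123.2   # reported balance, 2016
closing_retained_profits = 191.6   # reported balance, 2017

# closing = opening + profit after tax - dividends (and other direct adjustments)
implied_net_movement = closing_retained_profits - opening_retained_profits
print(f"Implied net movement for 2017 (profit after tax less dividends/other): "
      f"${implied_net_movement:.1f} million")   # prints 68.4
```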
The next equity item is Non-Controlling Interests, which shows a large fall in 2017 compared with 2016: $51.5 million in 2017 against $132.5 million in 2016 (csr.com.au, 2017).

Answer 2

Companies incur different types of expenses, such as selling expenses, administrative expenses and other operating expenses. One important expense for CSR Limited is Tax Expense. Tax expense is obtained by multiplying the tax rate by the before-tax income of the company, after the process of tax reconciliation (Thomas & Zhang, 2014). CSR Limited owes its business tax expense to the government of Australia. Under Australian taxation law, the income tax rate applicable to CSR Limited in 2017 and 2016 was 30%. According to the annual report of CSR Limited, income tax expense decreased from $64.4 million in 2016 to $61.7 million in 2017 (csr.com.au, 2017). CSR Limited has shown its total income tax expense in two parts: Current Tax Expense and Deferred Tax Expense for the movement of deferred tax balances. Current tax expense decreased in 2017 from 2016, from $43.9 million to $29.3 million (csr.com.au, 2017). On the other hand, deferred tax expense increased in 2017 from 2016, from $20.5 million to $32.4 million. CSR Limited also has $9.3 million and $34.5 million of income tax payable for its consolidated group entities, PGH Bricks & Pavers Pty Limited and Gove Aluminium Finance Limited (csr.com.au, 2017).

Answer 3

According to the 2017 Annual Report of CSR Limited, the company had $266.8 million and $233.7 million of profit before income tax in 2017 and 2016 respectively. At the same time, CSR Limited has a 30% tax rate applicable to profit before income tax (csr.com.au, 2017). At this tax rate, the tax expense of CSR Limited should be $80.0 million ($266.8 million * 30%) in 2017 and $70.1 million ($233.7 million * 30%) in 2016. However, the reported income tax expenses of CSR Limited are $61.7 million and $64.4 million in 2017 and 2016 respectively. This means there are clear differences between the calculated and reported income tax expenses of CSR Limited. The tax reconciliation statement of CSR Limited shows the specific factors responsible for this difference: these items are adjusted against the calculated tax expense, and thus the difference arises.

The first item is the share of net profit of joint venture entities. CSR Limited has earned some profit from its joint venture businesses, and the tax effect of that portion of profit is adjusted in the reconciliation (Dhaliwal et al., 2013). For this purpose, $4.3 million in 2017 and $3.7 million in 2016 have been deducted (csr.com.au, 2017). The next item is non-taxable profit on property disposal; companies are normally required to pay tax on income from the disposal of assets, but where such a profit is non-taxable it is adjusted out of the reconciliation. For this reason, CSR Limited deducted $1.9 million in 2017 and $5.9 million in 2016. The next item is the underpayment or overpayment of income tax in respect of 2016 and 2015. In 2016, CSR Limited recognized a $3.0 million adjustment that was added to the tax expense, while in 2017 an $11.4 million adjustment relating to the 2016 income tax provision was recognized (csr.com.au, 2017). The final item is other items, which include the impact of permanent differences regarding significant items (Armstrong, Blouin & Larcker, 2012). For this reason, $0.7 million was deducted in 2017 and $3.0 million was added in 2016 (csr.com.au, 2017). These factors are responsible for creating the difference in income tax expense even though the tax rate is the same.
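To make the 2017 reconciliation concrete, the sketch below reworks the arithmetic using only the figures quoted in this answer. One assumption is flagged in the code: all four reconciling items are treated as reductions of the statutory charge, because that is the reading under which the numbers tie back to the reported $61.7 million; the signs in CSR Limited's published reconciliation should be checked against the annual report itself.

```python
# Illustrative check of CSR Limited's 2017 tax reconciliation (A$ million).
# Assumption: every reconciling item below reduces the statutory charge; this
# is inferred from the fact that the total then ties to the reported expense.
profit_before_tax = 266.8
statutory_rate = 0.30

statutory_tax = profit_before_tax * statutory_rate   # about 80.0

adjustments = {
    "share of net profit of joint venture entities": -4.3,
    "non-taxable profit on property disposal": -1.9,
    "prior-year (2016) income tax adjustment": -11.4,
    "other items (permanent differences)": -0.7,
}

reconciled_expense = statutory_tax + sum(adjustments.values())
print(f"Statutory charge at 30%: ${statutory_tax:.1f} million")
print(f"Reconciled tax expense:  ${reconciled_expense:.1f} million (reported: $61.7 million)")
```

Rounding aside, the walk lands on the reported figure, which supports reading all four items as reductions of the statutory charge.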
Answer 4

There are many instances where companies pay an extra amount of tax in relation to their assets, and these amounts give rise to Deferred Tax Assets (Laux, 2013). On the other hand, companies sometimes pay less tax in relation to their liabilities, giving rise to Deferred Tax Liabilities (Harrington, Smith & Trippeer, 2012). In the balance sheet, or statement of financial position, of CSR Limited, both deferred tax assets and liabilities are reported. As per the statement of financial position, the deferred income tax assets of CSR Limited in 2017 and 2016 were $201.2 million and $239.3 million respectively (csr.com.au, 2017). The statement of financial position also shows that the company has not reported any deferred tax liabilities in 2017, while it had $20.9 million of deferred tax liabilities in 2016. Thus, the net deferred tax assets of CSR Limited are $201.2 million and $218.4 million in 2017 and 2016 respectively (csr.com.au, 2017). There are specific reasons behind CSR Limited's recording of deferred tax assets and liabilities. The main reason is depreciation, as a difference arises between the depreciation charged in the income statement and the depreciation allowed in the taxable income calculation. Where CSR Limited has effectively paid tax in advance because of this difference, the amount is recognized as a deferred tax asset. Conversely, the main reason for recording a deferred tax liability is a temporarily lower tax payment.

Answer 5

Current tax assets, or income tax receivables, arise when a company has already paid excess income tax in advance. On the other hand, current tax liabilities, or income tax payable, arise when a company still has to pay a certain amount of the previous year's tax in the current year (Hutchens & Rego, 2013). Both are important in the tax treatment of companies. As per the annual report of CSR Limited, the company had $0.5 million in 2017 and $0.5 million in 2016 as current tax assets (csr.com.au, 2017). In addition, the company had $10.3 million in 2017 and $38.1 million in 2016 as current tax liabilities. The annual report of CSR Limited also shows $61.7 million and $64.4 million of income tax expense for 2017 and 2016 respectively (csr.com.au, 2017). It can be observed that there is a difference between income tax payable and income tax expense. The financial notes on income tax expense show that the income tax expense represents all tax-related charges for the year, namely the current tax expense and the deferred tax expense, whereas the income tax payable is only the portion of the current-year charge that remains unpaid at balance date. In addition, the company may not pay the whole amount of income tax payable in the current year. For all these reasons, there is a difference between income tax payable and income tax expense.
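A minimal sketch tying Answers 2, 4 and 5 together: the total income tax expense in the income statement is simply the current tax charge plus the deferred tax movement, while the income tax payable in the balance sheet is only the unpaid slice of the current charge at balance date. All figures below are the amounts quoted above; the point is the relationship rather than the exact layout of CSR Limited's note.

```python
# How the quoted tax figures for CSR Limited fit together (A$ million).
years = {
    2017: {"current_tax_expense": 29.3, "deferred_tax_expense": 32.4, "income_tax_payable": 10.3},
    2016: {"current_tax_expense": 43.9, "deferred_tax_expense": 20.5, "income_tax_payable": 38.1},
}

for year, f in years.items():
    # Income statement: total expense = current charge + deferred movement.
    total_expense = f["current_tax_expense"] + f["deferred_tax_expense"]
    # Balance sheet: the payable is only the still-unpaid part of the current
    # charge, which is why it is smaller than the total expense.
    print(f"{year}: income tax expense = ${total_expense:.1f} million, "
          f"income tax payable = ${f['income_tax_payable']:.1f} million")
```

The totals printed here match the reported expenses of $61.7 million (2017) and $64.4 million (2016), and the same split helps explain the cash flow difference discussed next: the statement of cash flows records only the tax actually paid during the year, so it reflects the settlement of the payable rather than the accrued expense.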
Answer 6

In the financial statements, companies report their tax amounts in two places: the Income Statement and the Statement of Cash Flows, and CSR Limited is no exception. The annual report of CSR Limited shows tax reported in both statements. The amount of tax expense in the income statement is $61.7 million in 2017 and $64.4 million in 2016, while the amount of tax paid in the statement of cash flows is $52.7 million in 2017 and $14.6 million in 2016 (csr.com.au, 2017). Thus, a clear difference can be seen between the amounts reported in the income statement and the statement of cash flows. The statement of cash flows records cash inflows and outflows of the period, so it shows only the income tax actually paid during the year, which largely reflects the settlement of the income tax payable carried in the balance sheet. It does not include the accrued tax expense of the current year that has not yet been paid, nor the non-cash deferred tax movement (Rego & Wilson, 2012). For this reason, a difference arises between the tax expense in the income statement and the tax paid in the statement of cash flows.

Answer 7

From the above discussion, it can be observed that CSR Limited has handled its various tax treatments in a well-controlled and interesting manner. CSR Limited has provided the necessary justification and clarification of its tax treatment in the financial notes, along with the relevant calculations. The most important parts are the tax reconciliation statement and the deferred tax note. In the tax reconciliation statement, the company identifies all the factors responsible for the difference between the calculated and reported tax expense. After observing the tax treatment of CSR Limited, one can also understand the reasons behind the difference between the tax expense recorded in the income statement and the tax paid shown in the statement of cash flows. As CSR Limited provides supporting evidence for each tax treatment, examining its accounts is very helpful in developing a deeper insight into how companies account for income tax.

References

Annual Meetings and Reports. (2018). Corporate. Retrieved 6 January 2018, from https://www.csr.com.au/investor-relations-and-news/annual-meetings-and-reports

Armstrong, C. S., Blouin, J. L., & Larcker, D. F. (2012). The incentives for tax planning. Journal of Accounting and Economics, 53(1), 391-411.

Brigham, E. F., & Ehrhardt, M. C. (2013). Financial management: Theory & practice. Cengage Learning.

Brigham, E. F., & Houston, J. F. (2012). Fundamentals of financial management. Cengage Learning.

Dhaliwal, D. S., Kaplan, S. E., Laux, R. C., & Weisbrod, E. (2013). The information content of tax expense for firms reporting losses. Journal of Accounting Research, 51(1), 135-164.

Harrington, C., Smith, W., & Trippeer, D. (2012). Deferred tax assets and liabilities: tax benefits, obligations and corporate debt policy. Journal of Finance and Accountancy, 11, 1.

Hutchens, M., & Rego, S. (2013). Tax risk and the cost of equity capital. Available at SSRN.

Laux, R. C. (2013). The association between deferred tax assets and liabilities and future tax payments. The Accounting Review, 88(4), 1357-1383.
Rego, S. O., & Wilson, R. (2012). Equity risk incentives and corporate tax aggressiveness. Journal of Accounting Research, 50(3), 775-810.

Thomas, J., & Zhang, F. (2014). Valuation of tax expense. Review of Accounting Studies, 19(4), 1436-1467.