Papers by Shawn Ostermann
Scalable Computing: Practice and Experience, Mar 1, 2001
Parallel and Distributed Computing Systems (ISCA), 2005
Telecommunication Systems, 2002
The elevating flap-folding and flap-anchoring head of an automatic carton closing machine is provided with a balancing suspension, by flexible tether means from a support tower or equivalent, in any of a variety of ways, so that the progressively applied forces of operational actions upon the top of each carton, as it is progressively advanced and closed beneath the head, are easily accommodated and offset by the supporting mechanism and its reaction.
A heterogeneous distributed database system (HDDBS) is a system which integrates preexisting database systems to support global applications accessing more than one database. The difficulties and general approaches of global concurrency control in HDDBSs are studied. In particular, the author discusses the difficulties of maintaining global serializability in HDDBSs. The need for new strategies is established, and counterexamples to existing algorithms are given. An assumption usually made in addressing multidatabase systems is that the element databases are autonomous. The meaning of autonomy and its effects on designing global concurrency control algorithms are addressed. It is the goal of this study to motivate other researchers to realize the need for new correctness criteria by which concurrency control algorithms can be validated.
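As a concrete illustration of why local serializability does not compose globally, consider the textbook two-site scenario (an illustration of the general problem, not necessarily one of the paper's counterexamples): each site orders the global transactions consistently on its own, yet the combined serialization graph contains a cycle. A minimal Python sketch, with hypothetical site and transaction names:

```python
# Classic illustration: each site's local schedule is serializable,
# but the combined (global) schedule is not.
# Site A serializes the global transactions as T1 -> T2,
# site B serializes them as T2 -> T1.
from collections import defaultdict

local_orders = {
    "SiteA": [("T1", "T2")],   # edge Ti -> Tj: Ti precedes Tj at this site
    "SiteB": [("T2", "T1")],
}

def has_cycle(edges):
    """Detect a cycle in the global serialization graph with DFS."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)

    def dfs(node):
        color[node] = GRAY
        for nxt in graph[node]:
            if color[nxt] == GRAY:
                return True            # back edge => cycle
            if color[nxt] == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(graph))

global_edges = [e for order in local_orders.values() for e in order]
print("Globally serializable?", not has_cycle(global_edges))  # -> False
```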
Computer communication software presents the abstraction of a single, global communication system, known as an internetwork, over which any connected computer can send information to any other connected computer. The information transfer service provided by an internetwork, however, is not guaranteed to be reliable; information can be lost, delayed, duplicated, or corrupted. A piece of communication software known as a transport protocol is responsible for providing a layer of reliability above the internetwork layer. A transport protocol provides the abstraction of a reliable mechanism for the exchange of information between applications. An exchange of information between applications, or conversation, consists of units of data called messages. For applications that rely on relatively short-lived conversations, existing widely-used transport protocols impose a high efficiency penalty because of the amount of time required by the transport protocol to establish, manage, and terminate short-lived conversations. This dissertation defines a conceptual model of a transport protocol and the underlying internetwork, and characterizes the various aspects of protocol reliability as being Arrival, Order, Uniqueness, Integrity, Replay, and Performance. Using these models of the transport protocol, the internetwork, and the various aspects of reliability, this dissertation describes the Simple Reliable Message Protocol, SRMP, a new message transport protocol. SRMP uses novel reliability mechanisms that allow it to efficiently manage short-lived conversations. SRMP's performance is compared to the performance of existing transport protocols in the areas of short-lived conversation efficiency, the amount of protocol acknowledgement information required, and performance in the presence of congestion. Experiments comparing SRMP to existing transport protocols show that SRMP manages short-lived conversations more efficiently than existing, widely-used transport protocols. Additional experiments indicate that decreasing the amount of acknowledgement information returned by SRMP, by extending the acknowledgement interval, can further improve the protocol's performance in certain cases. Further experimental data shows that SRMP's congestion control mechanisms are equivalent to the mechanisms used by the commonly used transport protocol TCP, whose congestion control mechanisms have been widely studied.
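To make the short-lived-conversation argument concrete, the following sketch compares the minimum completion time of a one-request/one-response exchange over a handshake-based transport against a hypothetical message protocol that carries the request in its first packet; the RTT value and the one-RTT assumption are illustrative, not figures from the dissertation.

```python
# Minimum completion time for a single request/response conversation,
# ignoring transmission time and losses (illustrative only).
def tcp_style_latency(rtt):
    # 1 RTT for the SYN/SYN-ACK handshake (the final ACK can carry the
    # request), plus 1 RTT for the request -> response exchange.
    return 2 * rtt

def message_protocol_latency(rtt):
    # Hypothetical SRMP-like exchange: the request travels in the very
    # first packet, so only 1 RTT elapses before the response arrives.
    return 1 * rtt

rtt = 0.100  # 100 ms round-trip time (assumed)
print(f"handshake-based: {tcp_style_latency(rtt) * 1000:.0f} ms")       # 200 ms
print(f"message-based:   {message_protocol_latency(rtt) * 1000:.0f} ms")  # 100 ms
```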
In this work, we collect and analyze all of the IP and TCP headers of packets seen on a network that either violate existing standards or should not appear in modern internets. Our goal is to determine the reason that these packets appear on the network and evaluate what proportion of such packets could cause actual damage. Thus, we examine and divide the unusual packets obtained during our experiments into several categories based on their type and possible cause and show the results.
The specification for the File Transfer Protocol (FTP) contains a number of mechanisms that can be used to compromise network security. The FTP specification allows a client to instruct a server to transfer files to a third machine. This third-party mechanism, known as proxy FTP, causes a well known security problem. The FTP specification also allows an unlimited number of attempts at entering a user's password. This allows brute force "password guessing" attacks. This document provides suggestions for system administrators and those implementing FTP servers that will decrease the security problems associated with FTP.
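The proxy-FTP weakness comes from the PORT command of the base FTP specification (RFC 959), which lets the client name an arbitrary host and port for the data connection. The sketch below only formats such a command (address and port are hypothetical); a server that opens a data connection to whatever address PORT names can be used to "bounce" connections to third parties.

```python
def port_command(host, port):
    """Format an RFC 959 PORT argument: h1,h2,h3,h4,p1,p2."""
    h1, h2, h3, h4 = host.split(".")
    p1, p2 = port // 256, port % 256
    return f"PORT {h1},{h2},{h3},{h4},{p1},{p2}"

# Nothing prevents the client from naming a third machine here; a server
# that connects wherever PORT points enables the classic bounce attack.
print(port_command("192.0.2.10", 8080))   # PORT 192,0,2,10,31,144
```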
With the growing threat of abuse of network resources, it becomes increasingly important to be able to detect malformed packets on a network and estimate the damage they can cause. Carefully constructed, certain types of packets can cause a victim host to crash while other packets may be sent only to gather necessary information about hosts and networks and can be viewed as a prelude to attack. In this paper, we collect and analyze all of the IP and TCP packets seen on a network that either violate existing standards or should not appear in modern internets. Our goal is to determine what these suspicious packets mean and evaluate what proportion of such packets can cause actual damage. Thus, we divide unusual packets obtained during our experiments into several categories depending on the severity of their consequences, including indirect consequences as a result of information gathering, and show the results. The traces analyzed were gathered at Ohio University's main Internet link, providing a massive amount of statistical data.
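For illustration, the sketch below flags a few well-known TCP flag combinations that either violate the specification or are characteristic of scanning tools; these categories are examples of the kinds of violations examined, not the paper's actual taxonomy.

```python
# TCP flag bits (low byte of the flags field)
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

def classify_tcp_flags(flags):
    """Label a few flag combinations that violate the spec or signal scans."""
    if flags & SYN and flags & FIN:
        return "invalid: SYN and FIN set together"
    if flags == 0:
        return "suspicious: no flags set (null scan)"
    if flags & FIN and flags & PSH and flags & URG:
        return "suspicious: FIN+PSH+URG (Xmas scan)"
    if flags & FIN and not flags & ACK:
        return "suspicious: FIN without ACK"
    return "ordinary"

for f in (SYN, SYN | FIN, 0, FIN | PSH | URG):
    print(f"{f:#04x}: {classify_tcp_flags(f)}")
```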
Lecture Notes in Computer Science, 2003
Communicating data in deep-space, across interplanetary distances, entails constraints such as signal propagation delays on the order of minutes and hours, high channel error characteristics, meager and asymmetric bandwidth availability, and disruptions due to planetary orbital dynamics and antenna scheduling constraints on Earth. The Licklider Transmission Protocol (LTP) is being designed as a reliable data transmission protocol optimized for this environment. We present a dynamic priority paradigm for LTP jobs that may help improve the volume and value of data communicated in deep-space by quantifying each job's Intrinsic Value and Immediacy. We study convolutional codes, Reed-Solomon codes, Raptor codes, and some of their combinations, over various channel error rates. We show how the appropriate application of these mechanisms to each job, based on its Immediacy and Intrinsic Value, can improve the aggregate value of data transferred over the channel across various job mixes.
This document specifies the preferred method for transporting Delay- and Disruption-Tolerant Networking (DTN) protocol data over the Internet using datagrams. It covers convergence layers for the Bundle Protocol (RFC 5050), as well as the transportation of segments using the Licklider Transmission Protocol (LTP) (RFC 5326). UDP and the Datagram Congestion Control Protocol (DCCP) are the candidate datagram protocols discussed. UDP can only be used on a local network or in cases where the DTN node implements explicit congestion control. DCCP addresses the congestion control problem, and its use is recommended whenever possible. This document is a product of the Delay-Tolerant Networking Research Group (DTNRG) and represents the consensus of the DTNRG.
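A minimal sketch of the UDP case, assuming one bundle per datagram and that congestion control is provided by the DTN node as the document requires; the destination address and port are hypothetical.

```python
import socket

def send_bundle_over_udp(bundle, host, port):
    """Send one DTN bundle (opaque bytes) in a single UDP datagram.

    Illustrative only: assumes the bundle fits in one datagram and that
    congestion control is handled elsewhere, as required for plain UDP.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(bundle, (host, port))

send_bundle_over_udp(b"...bundle bytes...", "198.51.100.7", 4556)
```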
There are many factors governing the performance of TCP-based applications traversing satellite channels. The end-to-end performance of TCP is known to be degraded by the reordering, delay, noise, and asymmetry inherent in geosynchronous systems. This result has been based largely on experiments that evaluate the performance of TCP in single-flow tests. While single-flow tests are useful for deriving information on the theoretical behavior of TCP and allow for easy diagnosis of problems, they do not represent a broad range of realistic situations and therefore cannot be used to comment authoritatively on performance issues. The experiments discussed in this report test TCP's performance in a more dynamic environment, with competing traffic flows from hundreds of TCP connections running simultaneously across the satellite channel. Another aspect we investigate is TCP's reaction to bit errors on satellite channels. TCP interprets loss as a sign of network congestion. This causes TCP to reduce its transmission rate, leading to reduced performance when loss is due to corruption. We allowed the bit error rate on our satellite channel to vary widely and tested the performance of TCP as a function of these bit error rates. Our results show that the average performance of TCP on satellite channels is good even under conditions of high bit-error-rate loss.
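To see how corruption-induced loss feeds into TCP's congestion response, the sketch below converts a bit error rate into an approximate packet loss probability and applies the widely cited Mathis steady-state throughput approximation, throughput ≈ (MSS/RTT)·C/√p. The link parameters are assumptions for illustration, not the values measured in the report.

```python
import math

def packet_loss_prob(ber, packet_bytes):
    """Probability that at least one bit of a packet is corrupted."""
    return 1.0 - (1.0 - ber) ** (packet_bytes * 8)

def mathis_throughput(mss_bytes, rtt_s, loss_prob, c=1.22):
    """Widely cited steady-state TCP throughput approximation (bytes/sec)."""
    return (mss_bytes / rtt_s) * (c / math.sqrt(loss_prob))

mss, rtt = 1460, 0.56          # assumed GEO round-trip time of ~560 ms
for ber in (1e-8, 1e-7, 1e-6):
    p = packet_loss_prob(ber, mss)
    bps = mathis_throughput(mss, rtt, p) * 8
    print(f"BER {ber:.0e}: loss p={p:.5f}, ~{bps / 1e6:.2f} Mbit/s")
```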
SpaceOps 2006 Conference, Jun 19, 2006
Deep space presents a challenging environment for communication, exhibiting constraints such as very high signal propagation delays, high channel error rates, and meager and expensive bandwidth availability. Further, a spacecraft participating in a deep-space mission typically carries a host of scientific instruments, each capable of generating data at different rates, coupled with different reliability and timeliness needs for communication. In this paper we present a two-dimensional Priority Paradigm designed for applications operating in this scenario. The first dimension is Immediacy, a measure of how urgently data needs to be delivered; orthogonal to that is Intrinsic Value, a measure of how reliable the data delivery needs to be. We then study, across various link error rate characteristics, the performance of two candidate mechanisms for implementing the policy expressed by these two priority parameters: adapting the FEC mechanism in use, and adaptively varying the packet size in use.
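A hypothetical sketch of how the two dimensions might drive per-job transmission parameters; the thresholds, code rates, and packet sizes are invented for illustration and are not the mechanisms evaluated in the paper.

```python
def choose_parameters(immediacy, intrinsic_value, link_error_rate):
    """Map a job's (Immediacy, Intrinsic Value) onto illustrative settings.

    immediacy, intrinsic_value: floats in [0, 1]; thresholds are arbitrary.
    """
    # Higher intrinsic value -> stronger FEC (lower code rate), so the data
    # is more likely to survive a noisy channel without retransmission.
    if intrinsic_value > 0.7 or link_error_rate > 1e-4:
        fec_code_rate = 1 / 2        # strong protection
    elif intrinsic_value > 0.3:
        fec_code_rate = 3 / 4
    else:
        fec_code_rate = 7 / 8        # mostly unprotected, cheap

    # Higher immediacy -> smaller packets, so an individual loss costs less
    # when every retransmission round takes minutes to hours.
    packet_bytes = 256 if immediacy > 0.5 else 1024
    return fec_code_rate, packet_bytes

print(choose_parameters(immediacy=0.9, intrinsic_value=0.8, link_error_rate=1e-5))
```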
This paper outlines the main results of a number of ACTS experiments on the efficacy of using standard Internet protocols over long-delay satellite channels. These experiments have been jointly conducted by NASA's Glenn Research Center and Ohio University over the last six years. The focus of our investigations has been the impact of long-delay networks with non-zero bit-error rates on the performance of the suite of Internet protocols. In particular, we have focused on the most widely used transport protocol, the Transmission Control Protocol (TCP), as well as several application-layer protocols. This paper presents our main results, as well as references to more verbose discussions of our experiments.
UDP Convergence Layers for the DTN Bundle and LTP Protocols (Internet-Draft draft-irtf-dtnrg-udp-clayer-00).
FTP Extensions for IPv6 and NATs (Internet standards track protocol).
The specification for the File Transfer Protocol (FTP) assumes that the underlying network protocols use a 32-bit network address and a 16-bit transport address (specifically IP version 4 and TCP). With the deployment of version 6 of the Internet Protocol, network addresses will no longer be 32 bits. This paper specifies extensions to FTP that allow the protocol to work over a variety of network and transport protocols.
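These extensions were published as RFC 2428, which replaces PORT/PASV with the protocol-independent EPRT and EPSV commands. A small sketch of the EPRT argument format (the addresses and ports are illustrative):

```python
def eprt_command(addr, port):
    """Build an EPRT argument as defined in RFC 2428: |proto|addr|port|."""
    proto = 2 if ":" in addr else 1    # 1 = IPv4, 2 = IPv6
    return f"EPRT |{proto}|{addr}|{port}|"

print(eprt_command("192.0.2.1", 6275))       # EPRT |1|192.0.2.1|6275|
print(eprt_command("2001:db8::1", 5282))     # EPRT |2|2001:db8::1|5282|
```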
In several experiments using NASA's Advanced Communications Technology Satellite (ACTS), investigators have reported disappointing throughput using the TCP/IP protocol suite over 1.536 Mbit/sec (T1) satellite circuits. A detailed analysis of FTP file transfers reveals that both the TCP window size and the TCP "Slow Start" algorithm contribute to the observed limits in throughput.
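The window-size limit follows directly from the bandwidth-delay product: with the largest window available without the window-scale option and a geosynchronous round-trip time of roughly half a second, a single connection cannot fill a T1 circuit even after Slow Start completes. The RTT below is an assumed typical value, not a measurement from the experiments.

```python
WINDOW_BYTES = 65535          # largest window without the window-scale option
RTT_SECONDS = 0.56            # assumed GEO satellite round-trip time
T1_BITS_PER_SEC = 1.536e6

max_throughput_bps = WINDOW_BYTES * 8 / RTT_SECONDS
print(f"window-limited throughput: {max_throughput_bps / 1e6:.2f} Mbit/s")
print(f"fraction of T1 capacity:   {max_throughput_bps / T1_BITS_PER_SEC:.0%}")
```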
Computer Communication Review, Oct 15, 2004
Current congestion control algorithms treat packet loss as an indication of network congestion, under the assumption that most losses are caused by router queues overflowing. In response to losses (congestion), a sender reduces its sending rate in an effort to reduce contention for shared network resources. In network paths where a non-negligible portion of loss is caused by packet corruption, performance can suffer due to needless reductions of the sending rate (in response to "perceived congestion" that is not really happening). This paper explores a technique, called Cumulative Explicit Transport Error Notification (CETEN), that uses information provided by the network to bring the transport's long-term average sending rate closer to that dictated by congestion-based losses alone. We discuss several ways that information about the cumulative rates of packet loss due to congestion and corruption might be obtained from the network or through fairly generic transport-layer instrumentation. We then explore two ways to use this information to develop a more appropriate congestion control response (CETEN). The work in this paper is done in terms of TCP. Since numerous transport protocols use TCP-like congestion control schemes, the CETEN techniques we present are applicable to other transports as well. In this paper, we present early simulation results that show CETEN to be a promising technique. In addition, this paper discusses a number of practical and thorny implementation issues associated with CETEN.
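A loose illustration of the idea (not the specific CETEN variants developed in the paper): scale TCP's multiplicative decrease by the fraction of observed loss attributed to congestion rather than corruption. The fractions and window values are invented.

```python
def window_after_loss(cwnd, congestion_fraction):
    """Scale the multiplicative decrease by the share of loss believed
    to come from congestion (1.0 reproduces standard TCP halving)."""
    decrease = 0.5 * congestion_fraction
    return max(1.0, cwnd * (1.0 - decrease))

cwnd = 40.0   # congestion window, in segments
for frac in (1.0, 0.5, 0.1):
    print(f"congestion fraction {frac:.1f}: cwnd {cwnd} -> "
          f"{window_after_loss(cwnd, frac):.1f}")
```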
Performance evaluation review, Dec 1, 2003
Estimating loss rates along a network path is a problem that has received much attention within the research community. However, deriving accurate estimates of the loss rate from TCP transfers has been largely unaddressed. In this paper, we first show that using a simple count of the number of retransmissions yields inaccurate estimates of the loss rate in many cases. The mis-estimation stems from flaws in TCP's retransmission schemes that cause the protocol to spuriously retransmit data in a number of cases. Next, we develop techniques for refining the retransmission count to produce a better loss rate estimate for both Reno and SACK variants of TCP. Finally, we explore two SACK-based variants of TCP with an eye towards reducing spurious retransmits, the root cause of the mis-estimation of the loss rate. An additional benefit of reducing the number of needless retransmits is a reduction in the amount of shared network resources used to accomplish no useful work.
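A minimal sketch of the distinction drawn here: the naive estimator counts every retransmission as a loss, while a refined estimator discards retransmissions known to be spurious (for example, data later reported as having already arrived). The packet counts are invented for illustration.

```python
def naive_loss_rate(retransmits, packets_sent):
    """Treat every retransmission as evidence of a lost packet."""
    return retransmits / packets_sent

def refined_loss_rate(retransmits, spurious_retransmits, packets_sent):
    """Exclude retransmissions of data that had actually arrived."""
    return (retransmits - spurious_retransmits) / packets_sent

sent, rexmit, spurious = 10_000, 220, 60
print(f"naive:   {naive_loss_rate(rexmit, sent):.2%}")               # 2.20%
print(f"refined: {refined_loss_rate(rexmit, spurious, sent):.2%}")   # 1.60%
```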