Campus Communication Strategies Transcript

Network Security: Tools and Best Practices

Mark Bruhn
Assistant Director for Information and System Services and Information Security Officer
Indiana University
mbruhn@indiana.edu

Let's look at some tools and best practices when it comes to security and networking in particular.

So, how do we know that we have a problem? Unfortunately, we're still identifying exposures because of intrusions, computer incidents, or operating system problems. We are sharing more and more information between our institutions these days. If we can couple that with more formal risk assessment, then I think we'll be much further ahead.

The DOD Orange Book Guidance is used to evaluate operating system capabilities when it comes to security. However, I've gotten the most benefit from this information when I'm trying to describe to managers or staff the levels of security that we might need in our environment. Generally, operating systems are shipped with the capability of operating at the C2 level, though sometimes it does take parameter and configuration changes to attain that level.

A newer methodology is the Common Criteria for Information Technology Security Evaluation. This method allows for the evaluation of what they call "targets of evaluation." It's a very methodical, structured way to evaluate the security requirements in various situations, and was released for comment last year. I took a look at this myself. At first, it seemed very complex, but once I understood the structure and the way it was applied, it actually seemed to be very comprehensive.

In the old days, we handled some pretty simple exposures on our host systems, mostly dealing with users sharing passwords and ineffective management of authorization databases. We installed host security software and took care of most of those problems right away. We had closed networks, and the problems associated with those weren't necessarily related to security at all.

In the new environment that we're dealing with, though, the open networks environment with UNIX, we see that there are untrusted computers everywhere and untrusted communications. The intruders are organized. There are a couple of good things in this new environment, though. We see that desktop processing capabilities are increasing dramatically, allowing for encryption on the desktop. Increased commerce on the Net is generating more demand for security solutions, and certainly more interoperable products are allowing us to use the same security solutions in various implementations.

Let's talk a little bit about some specific security concerns related to networks. Certainly, cleartext data is a problem that we need to talk about quite a bit. Primarily, what we've been worrying about is passwords to systems being intercepted and used to change data. However, data about people and other sensitive data is also traversing the network in the clear, and simple disclosure and invasion of privacy should be a real concern.

An attacker can sometimes replay traffic recorded with a sniffer, minimally to confuse the destination host, but at times to cause some undesirable action on the host as well.

An attacker can delay or deny IP messages by changing the screening and routing rules used by routers, or by overwhelming one of the end systems with large amounts of network traffic. The SYN attack and the Ping of Death are examples of service denial attacks. In the TCP SYN attack, a sender transmits a volume of connections that cannot be completed. This causes the connection queues on the host to fill up, denying service to legitimate TCP users. As for the Ping attack, it's possible to crash, reboot, or otherwise kill a large number of systems by sending a ping of a certain size from a remote machine. This is a serious problem, mainly because this can be reproduced very easily and from a remote location.
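To make the SYN attack concrete, here is a small, purely conceptual Python sketch (the names and the queue size are illustrative, not taken from any real TCP implementation) of why a flood of half-open connections denies service to legitimate users.

    # Conceptual sketch only (not attack code): a host keeps a fixed-size
    # backlog of connections that have received a SYN but not the final ACK.
    # Once an attacker fills it with connections that will never complete,
    # new, legitimate SYNs are dropped. All names here are illustrative.

    BACKLOG_SIZE = 128          # a typical per-socket half-open queue limit
    half_open = []              # pending connections awaiting the final ACK

    def receive_syn(source_addr):
        """Handle an incoming SYN; return True if it could be queued."""
        if len(half_open) >= BACKLOG_SIZE:
            return False        # queue full -- further clients are refused
        half_open.append(source_addr)
        return True

    # The attacker sends a volume of SYNs (often from spoofed addresses) and
    # never completes the handshake, so nothing ever leaves the queue.
    for i in range(200):
        receive_syn("10.0.0.%d" % (i % 254 + 1))

    print(receive_syn("legitimate.client.example.edu"))   # False: denied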

There is currently no widely-implemented way of guaranteeing the integrity of an IP datagram. An attacker who modifies the contents of a datagram can also recalculate and update its header checksum and the datagram recipient will be unable to detect the change.
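As an illustration, here is a short Python sketch of the standard IP header checksum from RFC 791. Because the algorithm involves no secret, an attacker who alters a datagram can simply recompute a valid checksum; the sample header bytes below are a textbook example, not data from any real network.

    # The standard IP header checksum (RFC 791): a 16-bit one's complement
    # sum over the header. It involves no secret, so it detects accidents,
    # not deliberate tampering.

    def ip_checksum(header: bytes) -> int:
        if len(header) % 2:
            header += b"\x00"
        total = 0
        for i in range(0, len(header), 2):
            total += (header[i] << 8) | header[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
        return ~total & 0xFFFF

    # A sample 20-byte header whose checksum field (0xB861) is already valid;
    # running ip_checksum() over the whole header therefore returns zero.
    header = bytes.fromhex("45000073000040004011b861c0a80001c0a800c7")
    print(hex(ip_checksum(header)))   # 0x0

    # An attacker who changes any field zeroes the checksum bytes, calls
    # ip_checksum() on the modified header, and writes the new value back;
    # the recipient cannot tell that the datagram was altered.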

Address masquerading occurs when an attacker configures his network interface with the same address as another computer. This can gain the attacker access to resources intended for the true owner of that address, since access to some services (such as NFS) is contingent only upon the use of a correct network address. Of course, address masquerading is limited to machines on the same network.

Address spoofing refers to attacks in which intruders create packets with spoofed source IP addresses. These attacks exploit applications that use authentication based on IP addresses. This exploitation can lead to user and possibly root access on the targeted system.

Most routing protocols are susceptible to false route update messages, since they don't use secure authentication mechanisms. IP also supports a source routing option that allows an attacker to specify a routing path packets will travel along to their destination.

I would not be able to give you any information better than what you would find on the Ping of Death page. This page is maintained by Milacci Kenny and contains information about the problem, and some good information about vulnerable operating systems. In fact, there's an extremely large chart on this site where you can find your operating system and see if that problem has been taken care of, or if you need to worry about it.

In the same way, good information about the SYN attack is maintained on the CISCO page.

Here's a list of possible solutions for some of the problems that we've just discussed. Certainly this is not an exhaustive list, and some of these are very expensive to implement and to administer. In order to keep network traffic away from persons who might have the inclination to use it inappropriately, we can limit the connections to a particular network segment to only authorized users. The traffic generated on a secure subnet should then go directly across a hopefully more secure fiber backbone to a host on a secure data center subnet. In cases where two groups use the same subnet, a bridge should be used to limit the traffic as necessary.

There are two approaches that you can take when you're building your access control list. The first is Permit Unless Specifically Denied: you identify the high-risk subnets in your environment and, in your router control list, make sure that those are denied access to sensitive hosts. The second is Deny Unless Specifically Permitted: this requires an entry in your control list for every subnet whose users need access to these sensitive hosts, which is very difficult to administer, and expensive. In both of these instances, you will have to perform periodic audits to make sure that the criteria that were established are still being met. A small sketch of the two policies follows.
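Here is that sketch: a few lines of Python contrasting the two policies. The subnet prefixes are hypothetical, and real router access lists would of course be written in the router's own configuration language.

    # The subnet prefixes below are hypothetical.

    # Permit unless specifically denied: list the high-risk subnets to block;
    # anything unlisted reaches the sensitive host by default.
    DENIED_SUBNETS = {"10.9.0.", "192.168.50."}

    def allowed_permit_by_default(source_ip):
        return not any(source_ip.startswith(d) for d in DENIED_SUBNETS)

    # Deny unless specifically permitted: every subnet that needs access must
    # have its own entry; anything unlisted is refused by default.
    PERMITTED_SUBNETS = {"10.1.5.", "10.1.6."}

    def allowed_deny_by_default(source_ip):
        return any(source_ip.startswith(p) for p in PERMITTED_SUBNETS)

    print(allowed_permit_by_default("10.9.0.7"))    # False: high-risk subnet
    print(allowed_deny_by_default("10.1.7.20"))     # False: subnet not listed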

Encryption is the act of taking cleartext data and applying a key and an algorithm, changing it into ciphertext. In that way, it's masked so that only the recipient with the appropriate key can change it back into cleartext data again. There are two basic types of encryption: secret or private key encryption, and public/private key pair encryption.

In secret or private key encryption, a secret key is applied to the plaintext, changing it into ciphertext. The recipient must have the same secret key in order to change that text back into plaintext.

In public/private key encryption, the receiver's public key is used to encrypt the plaintext data into ciphertext. Only the receiver, using their private key, can change that ciphertext back into plaintext.
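As a minimal illustration of the two models, here is a Python sketch that assumes the third-party cryptography package is installed; any comparable library would serve as well.

    # Requires the third-party "cryptography" package (an assumption).
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    message = b"grade change request"

    # Secret (private) key encryption: one shared key both encrypts and
    # decrypts, so sender and recipient must already possess it.
    secret_key = Fernet.generate_key()
    ciphertext = Fernet(secret_key).encrypt(message)
    assert Fernet(secret_key).decrypt(ciphertext) == message

    # Public/private key pair encryption: anyone may encrypt with the
    # receiver's public key; only the receiver's private key can decrypt.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ciphertext = private_key.public_key().encrypt(message, oaep)
    assert private_key.decrypt(ciphertext, oaep) == message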

There are various situations in which we need to authenticate the identity of an entity. The most common, of course, is authenticating the identity of a user attaching to a host. However, we have processes on hosts that also need to be authenticated to processes on other hosts. More and more users are interested in ensuring that the mail that they are receiving is actually coming from the sender identified in the message. Digital signatures are being used in this way and Pretty Good Privacy (or PGP) is becoming more popular for this reason.
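A short sketch of the signing side of this, again assuming the third-party cryptography package: the sender signs with a private key, and any recipient holding the matching public key can verify both the origin and the integrity of the message.

    # Again assumes the third-party "cryptography" package.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.exceptions import InvalidSignature

    sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = b"Please reset my password."

    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = sender_key.sign(message, pss, hashes.SHA256())

    # Any recipient with the sender's public key can check the signature.
    try:
        sender_key.public_key().verify(signature, message, pss, hashes.SHA256())
        print("signature verifies: message came from the key holder, unaltered")
    except InvalidSignature:
        print("signature check failed: forged or modified message")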

There are three basic ways of authenticating users: something they know, something they carry, and something they are. Generally, it's accepted that if two of these methods are used, that's adequate authentication of identity in most situations.

Password generator devices or password tokens are becoming more popular as a secondary method of authenticating users. There are two basic types of dialogues that a user will go through. In the first, the user simply reads the one-time password from the face of the device and enters it into the terminal session. The second is a little bit more complicated, and implements a challenge/response dialogue. In the simple dialogue, the user connects to a protected host; the host requests a response or one-time password; the user enters the characters displayed on their handheld device at that moment. If the response is valid, then access is permitted. In the more complex challenge/response scenario, the user connects to a protected host, and the host displays a challenge. The user enters a PIN into their handheld device, enters the challenge from the terminal session into the device, and the card displays a response. The user enters that response into the host session. If the response is valid, access is permitted.
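The following Python sketch shows the shape of the challenge/response dialogue using only standard-library HMAC; real token products use their own vendor-specific algorithms, so every name and parameter here is an assumption for illustration.

    import hashlib
    import hmac
    import secrets

    def token_response(device_secret, pin, challenge):
        """What the handheld device computes from the PIN and the challenge."""
        msg = (pin + challenge).encode()
        return hmac.new(device_secret, msg, hashlib.sha256).hexdigest()[:8]

    # Host side: issue a random challenge, then compare the user's answer
    # against the response computed from its own copy of the device secret.
    device_secret = secrets.token_bytes(16)   # shared when the token is issued
    challenge = secrets.token_hex(4)
    expected = token_response(device_secret, "4931", challenge)

    users_answer = token_response(device_secret, "4931", challenge)  # typed in
    print("access permitted"
          if hmac.compare_digest(users_answer, expected) else "access denied")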

Kerberos is now a well-known model, developed to allow network applications to identify their peers. It involves trusted hosts relying on key servers to pass tickets instead of passwords. Kerberos solves the age-old problem of passwords traversing the network in the clear. What Kerberos does not address when used alone are the human factors of sharing, writing down, or having common passwords between protected and unprotected systems, as well as someone else guessing badly-chosen passwords. There are also a couple of other concerns. The key server in this environment becomes a critically sensitive resource. In addition, a dictionary attack is possible against the responses from the Kerberos server, since the ticket-granting tickets are encrypted using the user's password as the key. A dictionary attack can be run against a ticket until the cleartext is derived. Kerberos 5 allows for interfacing with password token systems, so this problem is practically eliminated. Also, the tickets are kept in memory on both the clients and the servers, so their security depends on the robustness of the security on those systems.
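To see why a password-derived key invites a dictionary attack, consider this toy Python sketch. The key derivation and "encryption" are deliberately simplified stand-ins, not the actual Kerberos algorithms, but the offline guess-and-check loop is the essence of the attack.

    import hashlib

    def derive_key(password):
        return hashlib.sha256(password.encode()).digest()

    def toy_cipher(data, key):             # stand-in "encryption": simple XOR
        return bytes(b ^ k for b, k in zip(data, key))

    # The attacker records a reply that was encrypted under the key derived
    # from the user's real (and guessable) password.
    captured = toy_cipher(b"krbtgt/REALM", derive_key("hoosiers"))

    # Offline, the attacker derives keys from a word list and looks for
    # recognizable plaintext; no further contact with the server is needed.
    for guess in ["password", "letmein", "hoosiers", "wombat42"]:
        if toy_cipher(captured, derive_key(guess)).startswith(b"krbtgt"):
            print("password recovered:", guess)
            break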

A firewall is a set of rules used to control information flow between trusted networks and the rest of the world. A firewall can be used to screen out the world outside of the local network entirely or partially, to screen certain local hosts from users inside the perimeter, or to keep users within the local environment from accessing services and hosts outside of it. Though that's not something we do too much in our university settings, you could imagine a corporation using this technique a little more.

There are a few very common firewall types. The simplest are packet filters, which can be implemented either on the router or on a computer host. The filter would block specific protocols, limit or disable services such as NFS or telnet, or limit access to specific domains or hosts. In another implementation of a firewall, all traffic for a particular subnet or host may be directed to an application gateway or proxy. This proxy would determine access authorizations and either reject the traffic or permit access. After allowing access, the proxy simply passes data back and forth between the user and the service. Sites would advertise the name of the application proxy and not the names of the screened hosts.
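Here is a deliberately minimal Python sketch of the "pass data back and forth" step of an application proxy, after the authorization decision has already been made. The host names and port are hypothetical, and a production gateway would add protocol-aware checks, logging, and error handling.

    import select
    import socket

    LISTEN_PORT = 8023
    SCREENED_HOST = ("screened-host.example.edu", 23)    # hidden behind the proxy

    listener = socket.socket()
    listener.bind(("", LISTEN_PORT))
    listener.listen(1)

    client, _ = listener.accept()                        # user connects to the proxy
    backend = socket.create_connection(SCREENED_HOST)    # proxy connects onward

    # Once access has been granted, just shuttle bytes between the two sides.
    while True:
        readable, _, _ = select.select([client, backend], [], [])
        for src in readable:
            data = src.recv(4096)
            if not data:                                 # one side closed; stop
                raise SystemExit
            (backend if src is client else client).sendall(data)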

A method that's been around for a while and is just getting more press is GSS, the Generic Security Services API. The main point with GSS is that it's independent of language environments or operating systems. Communications programs or applications can make calls to the GSS server. The GSS server establishes a security context in which the security calls are made.

DCE is gaining popularity, with many operating systems and applications vendors building in support for this service. The security service under DCE is based on Kerberos, which allows hosts and users (called principals) to authenticate one another. DCE security makes use of cryptographic checksum techniques that ensure data integrity by allowing corrupted data to be detected easily. In addition, DCE security provides a DCE registry service, which allows easy administration of the principals database, and a distributed access control mechanism that allows users and administrators to control access to resources. The DCE services include a Remote Procedure Call, which facilitates client-server communication so that an application can effectively access resources distributed across the network; the security service; a directory service, which provides a single naming model throughout the distributed environment; a time service, which synchronizes the system clocks throughout the network; a threads service, which provides multiple threads of execution capability; and a distributed file service, which provides access to files across the network.

More and more activities are taking place on the Web. With that, some very good thought is being given to securing transactions and forms in that environment. Standard Web service supplies simple authentication methods along with simple access control lists. S-HTTP is designed to support end-to-end secure transactions. It allows for full flexibility of cryptographic algorithms, modes, and parameters. Basically, the server and the client negotiate the terms of the transaction, and several different cryptographic algorithms and certification selections can be agreed on. Any message can be signed, authenticated, encrypted, or any combination of these. Multiple key management mechanisms are provided, including shared secrets/public key exchange, as well as Kerberos ticket distribution.

Secure Socket Layer, or SSL, is becoming very popular. It allows for varying encryption levels as well as authenticating the server to the client. Support exists for various key exchange algorithms as well as hardware tokens. SSL provides for channel security in that all messages are encrypted, the server connection is always authenticated, and the client connection can be optionally authenticated. There is also a message integrity check to ensure channel reliability, and application protocols other than HTTP, such as telnet or ftp, can layer on top of SSL. (A brief client-side sketch of an SSL connection appears at the end of this transcript.)

I've certainly not been able to cover all of these issues in any depth. There are many Internet references available for someone looking for additional resources. Using any of the topics I've covered as the search key will get you an ample supply of information.
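Here is the client-side SSL sketch mentioned above, using Python's standard ssl module. The host name is hypothetical; the point is simply that the channel is encrypted and the server's identity is verified against its certificate.

    import socket
    import ssl

    context = ssl.create_default_context()       # verifies server certificates

    with socket.create_connection(("www.example.edu", 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="www.example.edu") as tls:
            print("negotiated protocol:", tls.version())
            print("server certificate subject:", tls.getpeercert().get("subject"))
            # Application protocols such as HTTP simply layer on top:
            tls.sendall(b"HEAD / HTTP/1.0\r\nHost: www.example.edu\r\n\r\n")
            print(tls.recv(200).decode(errors="replace"))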
