Campus Communication Strategies
Network Security: Basics
Mark Bruhn
Assistant Director for Information and System Services and Information Security Officer
Indiana University
mbruhn@indiana.edu

Hello. For the next 45 minutes or an hour, we're going to be talking about security. We'll be talking about general security practices and philosophies, and then we'll talk about some tools, some vulnerabilities, and some solutions and best practices.

First, though, let's take a look at some information about the kind of people we're dealing with. When we're out looking around on the Web or the Internet for security sites, we usually come across some very interesting ones. They give us a good idea of how people are viewing security and how intruders are working with one another. This particular site, found on the Web, is called a Cracker FAQ. This one happens to deal with Novell, but certainly there are many others out there about various operating systems. The questions here were linked to pages full of information about how to handle each of these situations.

This particular site contained several pages of reviews of software, documents, and books, all referring to cracking tools, hacking tools, and virus creation tools. At the bottom there, you see that the virus creation lab is listed, and this particular individual thought that virus creation lab was "great."

On this page, you'll notice there's some information about dealing with technology and about how to break into systems. This particular page was maintained by a system administrator for a very large consulting firm. Certainly, there's no evidence that the consulting firm sanctioned this page. In the first paragraph, you'll see that it says "Use public site computers..."
In our institutions, unless we're very careful about who we let into our student computing facilities, those could certainly qualify as public site computers. At the bottom is an issue that we deal with quite a bit as well. He's indicating that, in his particular instance, he doesn't have time to check the systems he maintains as thoroughly as he'd like, meaning he doesn't have time to make sure that appropriate patches are applied, holes are plugged, and so on. Certainly, in our institutions, we probably have the same problem, and that's something we really need to think about.

This particular page contains text extracted from an online document about bomb making. The first statement there is very interesting, certainly not related to information security specifically, but more to physical security. This person is indicating that the best place to steal chemicals is a college, and he goes on to say that this is because normally students or faculty will not check people going through the buildings to see if they belong there, and that getting into chemistry labs to get those chemicals is relatively easy to do. Certainly, that's a general statement and doesn't specifically mention any of our universities. The second statement there is one that we really need to pay attention to, I think: "College campus security is pretty poor, as a rule." Again, that's the physical security aspect more than the information security aspect, I'm sure.

Let's talk about some general security practices and approaches; that is, some philosophies, some concepts, and some thoughts about experiences that we've had over the last couple of years. First, let's set some security context. When security professionals are trying to talk to each other and to explain security issues to management and other staff, we generally try to stay in certain categories.
Identification, authentication, authorization, and accountability are generally referred to together as IAAA.

Identification -- certainly we want to make sure that we're identifying every user uniquely when they're accessing the applications on our systems. Sometimes it's appropriate, however, to identify users anonymously or to use guest access, but we need to keep those to a minimum, especially in instances where you're going to need to go back and identify who took a particular action at some point. Obviously, when we do permit anonymous or guest access, we need to make sure that these users are contained in a sort of security bubble so they can't gain access to more sensitive areas on the same server.

Authentication -- we need to make sure that the user accessing the systems or the data is authenticated to a level that's appropriate for the sensitivity of that data and those applications.

Authorization -- once they are authenticated, we need to make sure that they're only authorized to access the applications and the data that the data managers or the business owners have authorized them to access.

Accountability -- we want to make sure that the person accessing the system remains identified that way throughout their session. That is, we don't want to allow them to change their identity in the middle of a session, nor do we want any unauthorized users to hijack or take over that session before it's ended appropriately.

We also talk a lot about data integrity, making sure that the databases and the data in our files are maintained and updated only via approved methods, via the applications or via the database management systems themselves. And certainly, privacy is always on our minds as we try to protect individual privacy and the privacy of our institutions.
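The IAAA sequence just described can be sketched as a minimal access-control check. This is purely illustrative, not part of the talk: the user names, credential store, and resource names are all invented, and a real system would of course use a proper authentication service rather than an in-memory dictionary.

```python
# Minimal IAAA sketch: every access passes through identification,
# authentication, authorization, and accountability, in that order.
# All names and data here are invented for illustration.

AUTHORIZED = {"alice": {"grades-db"}, "guest": {"course-catalog"}}
PASSWORDS = {"alice": "s3cret"}  # stand-in for a real credential store

audit_log = []  # accountability: record who accessed what

def access(user, password, resource):
    # Identification: reject users we cannot uniquely identify.
    if user not in AUTHORIZED:
        return "unidentified"
    # Authentication: guest access skips this, but is confined
    # (the "security bubble") to public resources only.
    if user != "guest" and PASSWORDS.get(user) != password:
        return "not authenticated"
    # Authorization: only resources the data steward has granted.
    if resource not in AUTHORIZED[user]:
        return "not authorized"
    # Accountability: the session stays tied to this identity.
    audit_log.append((user, resource))
    return "granted"
```

Note that the guest account never reaches the password check but is still stopped by the authorization step when it asks for anything outside its bubble.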
Some basic security thoughts that we probably need to accept as we're going into a formal security setting. First, security should be used to allow more open access: data managers should be able to comfortably grant access to people who perhaps have not had access before, without worrying that opening that access up will cause additional security problems. I think we need to understand also that there is no single security solution. It's not like locking a single door any more; there are many technologies, there are many servers, our campuses are very large, and the technical expertise is increasing dramatically.

A security strategy is essential while you're planning how to formally approach security and how you're going to present security to university administration or management, to staff, and to faculty. You need to be able to outline what it is that you're trying to accomplish -- some goals, some objectives -- and then make sure that everyone understands and, of course, accepts that that's the way things need to go.

Solutions design strategy has some very specific components. We need to understand that implementing solutions is going to be iterative. We're going to have to look at very high risk situations first. Once we take care of those, then certainly we'd go on and take care of some of the lesser risk exposures until we think we've got everything taken care of. But of course, as we add technologies and components into our environments, we'll need to go back and take another look at the exposures created by adding those as well.

Certainly, we can't do this alone. When we implement security measures, we have to talk to many people. There are many stakeholders -- business unit managers, staff, faculty, university administration, even external agencies that rely on us for data -- and they must be consulted.
We must put in security measures that aren't overly cumbersome, that meet the needs of the security areas, and that also allow the business units to stay in business. Analysis must be responsive, as I mentioned a minute ago. Things are being added to our environments on a regular basis; we must be able to get to those as soon as we can, take a look at what exposures they might create, and take care of them.

Solutions are cumulative. Some of the solutions that we put in place might very well depend on other solutions that we already have in place. If we need to replace one of those, we need to make sure that the other one is accounted for as well, so that things keep working.

Architecture must be evolutionary. Certainly, new technologies are coming along all the time. If we write something in-house, a vendor-supported solution might come along three or four days later, or maybe six months later, and then of course you can replace your in-house method with it.

When we first started looking at formal security issues at Indiana University, we sat down and developed a list of objectives -- things that we wanted to make sure we talked about and covered in our environment. This list is important less for its particular contents than because we sat down, talked about it, and developed it; we understood what we were looking at. As you look at this list, there are some obvious items. As I mentioned before, you want to uniquely identify users and adequately authenticate their identity. Passwords traveling on the network are still a problem. Data on the network is a little bit harder to take care of, and a little bit more expensive in machine time, but certainly we wanted to think about that as well. And, at the bottom, we wanted to make sure that IU's business functions are not inhibited unnecessarily by the security components that we put in place.
As we were talking about those objectives, we knew there were some things we had in place that we could rely on. Having the list is probably more important than the individual items in it, but as you look down through the list, the first one is probably the most important: we want to make sure that our users are educated, that they understand why we're doing security, why it's necessary to protect data as an institutional asset, and why it's important that security also protect them as they go about their business.

Some more components along the same lines: IU's data management structure is in place. We have a committee of data stewards. The point there is that there is a university effort to manage data access and data administration. We rely on those groups heavily, and they meet periodically. In addition, we've added some full time staff as we've increased our formal security process. We've sent folks to conferences, and we rely a lot on colleagues at other universities and other corporations as well.

One of the things that I try to make clear when I'm talking to people about security is that the ethical and legal protections that are required are the same regardless of how access is provided or how data is delivered. There are some very nice new environments out there, the Web being one of them. Sometimes people look at those and think they can trade security off against a nice delivery mechanism or interface. We need to make sure that they continue to think about security.

As far as security and technical staff involvement in making decisions about application security, I think it's very important that the security and technical staff understand the technical environment and the data delivery environment, and that they be able to adequately describe them to the business managers, the people who are actually managing that data.
The technologists should build an adequate tool set for the data manager to use, based on the sensitivity of the data. The data manager should be able to pick a PIN, for example, where that's appropriate for the data, or perhaps a password token for more sensitive data. However, what I do think is that the steward of the data in question is ultimately responsible, which means the technologists have to be prepared to describe enough of the technology that the steward can make that decision without it coming back to haunt them later.

The question of who does security is still discussed over and over in various places. Some ideas to think about: the technology experts are very busy, and as you talk to them, you'll notice that they each have very different opinions about security requirements -- how much security might be required, how much security their particular system might need. Certainly the business unit staff are also concerned with their mission: registering the students, bringing in the money, checking out library books, whatever that might be. The point is, a dedicated security administrator or team can focus on security. That is, you can focus your paranoia, if you will, into one or two or more people. They can investigate security issues, look at security exposures, go to conferences, and bring that information back and share it with the rest of the community.

Should security staff report inside or outside of technology units? Auditors will tell you on occasion that having security staff report to technology directors or managers might present an opportunity for conflict of interest. I can just tell you that, in my early tenure as a security professional, I reported to an administrative director. That person was more interested in budget and in administering the department than in any technical issues, certainly.
For the last couple of years, I've reported to technical directors in the computing department, and that's actually worked out very well, with no problems at all. Despite what they tell you, security analysts cannot know everything about the technology in our environments. If that were the expectation, then we would probably need a staff of ten or twelve people dedicated to the various operating systems and networks that we have. I maintain that security staff should have a broad base of medium technical knowledge about the technologies they're dealing with. More importantly, they should also be fully aware of who to go to -- who the technical gurus really are -- to get the help they need.

Security staff need very good interpersonal skills in order to be able to talk with everyone in the business units, from managers to administrators to staff, and to help them understand security and why things are the way they are. In addition, the security staff should have a decent knowledge of how the business runs in various areas, so that they can ask the appropriate questions of those business unit managers when it comes to security issues and exposures. Security, technical, and business staff must get together and be objective when they are looking at needs in their areas of responsibility, and make sure that the business mission can still be accomplished and that users are not overly inconvenienced by what is being put in place.

I say there that auditors are our friends. Certainly I don't mean that to be a sarcastic statement; in fact, just the opposite. I rely on our audit staff, both internal and external, to help me when I'm talking to staff members, managers, and business units about exposures. It does help, because you get a second opinion, and they are the experts in control situations.

Response teams are very helpful, external and internal. I suggest that you have a local team of technical experts, and that this team be the focus of education and training.
This team should investigate incidents and should also report to management on how those incidents happened and how they were taken care of. In addition, as part of that team, you would probably want a public relations member and a legal member to give that kind of advice as well.

The CERT™ Coordination Center has been around for a while. They send out advisories on a regular basis. They also have an archive service and send out summaries, so you can get caught up if you need to. FIRST, the Forum of Incident Response and Security Teams, is newer. It is a consortium of local incident response teams. The focus of this group is information sharing, and they use digital signatures and encryption to ensure the confidentiality of the information that's shared.

There are reasons to maintain various inventories of what's in our environments, the most important of which is probably a technical inventory. There are some standard reasons why we would want to do this, including capital asset management or tracking of hardware maintenance costs. However, there are some very good security reasons as well. In order to perform good risk assessment, you need to be able to identify what operating systems, what network protocols, what bridges, and what other components are in your environment, so that when you do get advisories, you'll know whether or not they apply.

We need to know what human and other resources we have available to us as well. Funding has always been a problem in our environments; we need to be able to prove that security is worthwhile and does not unduly inhibit business processes. Data management is very important: we need a data management structure within the university, with data managers and data stewards meeting and talking about managing data, access policies, and so on. We also want to rely on our technology users' groups. University counsel is certainly very important when legal aspects are discussed.
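The security case for a technical inventory made above is that when an advisory arrives, you can immediately tell which of your systems it applies to. A minimal sketch, with entirely invented hosts and platforms, might look like this:

```python
# Sketch: match an incoming advisory against a technical inventory
# so we know right away which hosts it applies to.
# The inventory data below is invented for illustration.

inventory = [
    {"host": "www1", "os": "Solaris 2.6", "services": ["httpd", "sendmail"]},
    {"host": "lib1", "os": "Windows NT 4.0", "services": ["iis"]},
    {"host": "reg1", "os": "Solaris 2.6", "services": ["oracle"]},
]

def affected_hosts(advisory_os, advisory_service=None):
    """Return hosts an advisory applies to, by OS and optional service."""
    hits = []
    for box in inventory:
        if box["os"] == advisory_os and (
            advisory_service is None
            or advisory_service in box["services"]
        ):
            hits.append(box["host"])
    return hits
```

With an up-to-date inventory, an advisory about, say, a sendmail hole on a given OS turns into a short list of machines to patch instead of a campus-wide scramble.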
Incident response teams, as we mentioned, are very important as well, both internal and external to our institutions. It's important to understand what policies we have in our environment. When security professionals are visiting with staff members and business units, they must be able to cite policies rather than making unilateral decisions based on specific situations.

An important thing for us to realize is that we have too many technologies in our environments to spend all of our time on each one. This means that we must assess risk, determine acceptable risk levels, and apply protections at the highest risk levels.

There are some fairly simple steps in assessing risk. First, you need to establish a methodology. Certainly, there are companies out there that will sell you their services for this purpose, but however you arrive at that methodology, it's important that everyone understand that it's the methodology that's going to be used and accept the results. You need to make use of your policies and your technology and resources inventories. Then you need to identify and evaluate vulnerabilities based on risk. You need to prioritize those vulnerabilities, highest risk first; implement protections on those; then go back and take care of the next tier of vulnerabilities. Certainly, this is an iterative process, and as we eliminate exposures in our environment, changes made by bringing in new technologies are going to identify more that we'll have to handle.

I hope that you've been able to gain some insight if you are trying to establish a security function or if you're struggling with managing or staffing an existing security function.
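The iterative assessment steps described above can be sketched as scoring each vulnerability and working down from the highest-risk tier. The likelihood-times-impact scoring here is an assumed placeholder, not the talk's methodology; whatever methodology you establish, the prioritization loop looks roughly the same.

```python
# Sketch of the iterative risk-assessment loop: score each
# vulnerability, protect the highest-risk items first, then move on
# to the next tier. Scores and names are invented placeholders.

vulns = [
    {"name": "unpatched sendmail", "likelihood": 0.9, "impact": 8},
    {"name": "cleartext passwords", "likelihood": 0.7, "impact": 9},
    {"name": "open guest share", "likelihood": 0.4, "impact": 3},
]

def prioritize(vulnerabilities):
    """Order vulnerabilities by risk = likelihood * impact, highest first."""
    return sorted(
        vulnerabilities,
        key=lambda v: v["likelihood"] * v["impact"],
        reverse=True,
    )

# Work through the list a tier at a time; new technologies feed
# fresh entries back into the same loop.
for v in prioritize(vulns):
    print(f"{v['name']}: risk {v['likelihood'] * v['impact']:.1f}")
```

The point of writing it down, even this crudely, is that everyone agrees on one scoring method up front and accepts the resulting order, rather than each group arguing for its own system first.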