Internet Security Privacy Policy

Friday, September 7, 2007

Calling the CyberCops: Law Enforcement and Incident Handling

A velvety darkness enfolds the room. From somewhere just on the edge of awareness a strange, rhythmic pulsing disturbs your sleep, yanking you rudely into the conscious world. For a few unreal moments you are disoriented and anxious, until your brain processes the sensory information flooding into it and reaches the conclusion that your beeper is going off. After scrambling madly in the dark, knocking over your bedside lamp, you eventually retrieve the offending little box and peer blearily at its antiseptic charcoal-on-gray message.

It's now 3:00 AM and you're sitting at a console in your computer room at the office, staring at a new directory named "ADMROCKS." You've been hacked. Your personal data space has been violated. Some nameless script kiddie has made a mockery of your well-laid security plans. What are you going to do about it?

History suggests that you'll clean up the mess, file a report with your boss, and maybe, if you're particularly community-minded, post some sanitized logs or exploit scripts to a public computer security forum such as the Security Focus "Incidents" mailing list. I mean, there's no point in calling the cops, right? They can't or won't do anything about it, right? Isn't that what you read all the time in news stories and in complaints posted on the Internet?

To a certain extent, there is some truth to this assertion. There are many thousands of computer intrusions reported each year, and their numbers have been growing far more rapidly than the staffing and training efforts of law enforcement agencies can accommodate. Life is like that sometimes. Does this mean it's pointless to report your breakin to the cops? Of course not. It just means that you need to optimize the quality of the data you provide to them, in order to maximize the chances that they'll be able to help you. Garbage in, garbage out applies here as surely as it does to virtually every other aspect of IT operations.

Most of what we know, or think we know, about cops is based on television shows and movies. Law enforcement is a perennial favorite topic of the entertainment industry, and portrayals run the gamut from self-sacrificing throw-yourself-on-the-live-grenade types to amoral robotic enforcers to frankly evil psychopathic criminals. Law enforcement agents have power to curtail our liberty, or at least ruin our day, and so we either fear or envy them (depending on whether or not you want to be one). Fear breeds loathing and mistrust; envy just breeds more envy. These are marketable emotions from Hollywood's point of view, so it doesn't take much of a conceptual leap to see why it is in their interests to exaggerate the potential for scandalous conduct by officials of the public trust.

The truth is, as it often turns out to be, far less interesting. Cops are just people; they make mistakes, feel impatience, cut corners, daydream, overlook things, and generally behave just like every other human being on this hapless planet. There is nothing that can be or, in my opinion, should be done about this. I like to believe that I'm dealing with other human beings, fallibilities notwithstanding; I'm more comfortable among my own kind. People who never make mistakes give me the heebie-jeebies. But hey, maybe that's just me.

Swinging around once more to the question of whether or not to involve the authorities in your IT crime scene, think about this: are you (and your senior management) willing to provide the resources, both in terms of technical expertise and downtime of the affected system(s), necessary for any chance at a successful investigation and prosecution? Remember that the purposes for which law enforcement agencies exist, as all of us my age or older know from watching "Adam-12," are to 'serve and protect.' Protect people and assets from assault, serve by pursuing and delivering suspected lawbreakers to the judiciary establishment for trial.

Obviously once you've been hacked it's a little late for the 'protect' part, so we should shift our attention to 'serve.' The police will serve you by investigating the crime and, if possible, bringing the responsible party/parties to justice, but only if that's what you want them to do. You're the one who has suffered a loss; you're the one who needs to initiate the process of recovering from that loss to the greatest extent possible.

Corporate entities in a capitalistic economy are concerned primarily with profits, and only those actions which in some way enhance the ability of the organization to generate those profits are likely to be supported. Don't forget this simple maxim when you contemplate what actions to take following a system compromise. The urge for vengeance may be strong, but if it doesn't make sense fiscally, it probably ain't gonna happen (unless of course you own the company, but that's rather rare for a computer security manager. If you are the owner/CEO of the company, you can skip all this philosophical stuff and go straight to implementation. The rest of us will catch up to you there).

In summary, if you want to snag the dude (or, more importantly, if you want anything done to the dude once he's collared), you need to call the cops. Of course, your insurance company might also be interested in documentation of the incident, as might any of a number of other departments, divisions, task forces, and interest groups in some way connected with your organization. However, if you do call the cops, be prepared to give them something to work with.
Thinking like a Cop

Now let's switch roles. You're a detective on a metropolitan police force. In college you majored in accounting and minored in criminal justice. You work mainly on white-collar crimes: bank fraud, embezzlement, stuff like that. The Lieutenant calls you into his office.

"Sit down," he says, "The department's been taking a lot of heat lately because we don't have a dedicated cybercrimes squad. As of today, you are that squad."

You look at him blankly.

"Next week you take a beginning course in Unix, then the week after that one on computer security."

You stand up to leave.

"Oh, and here's your first case...some hacker broke into XYZ Company last night. Get on it."

Sound like a script from a bad cop show? Nope, it's real life. This is more or less the way a lot of computer crimes detectives got their start. Is it the best way to generate cybercops? Maybe not, but often it's the most expedient from a police administrator's viewpoint. In writing about the problem of producing cops that know computers well enough to understand cybercrimes, I coined the phrase "[If you need something that barks and flies,] It's a lot easier to train a parrot to bark than a dog to fly." My point in employing this somewhat labored metaphor is that police work, for all its complexity, is much easier to pick up than the extremely esoteric knowledge needed to plumb the depths of buffer overflows, IP address spoofing, and man-in-the-middle attacks. Most really successful computer security experts have spent years sitting at consoles, hacking away at operating system kernels and coding nifty little utilities for this problem or that. They just can't teach that during the three months or so at the Police Academy.

Be that as it may, most agencies have been forced by budgetary or administrative circumstances to assign minimally computer-savvy investigators to their computer crimes squads. Most officers will therefore be somewhat at a disadvantage if you expect them to come in and know exactly what steps to take to secure evidence from your specific machines and network. Having a cooperative, extremely knowledgeable company representative such as a systems administrator working closely with the investigator is really essential for maximizing the efficiency of the data-gathering phase and minimizing the downtime of involved systems.

Imagine yourself in the cop's shoes, and provide support for the investigation accordingly. Peace officers are public servants, paid from tax revenues, so it makes sense from both a fiscal and a logistical point of view to make things as easy for them as you can. Your goal as the complainant should be to facilitate the investigation; the hardest task the officer should face is tracking and arresting the criminal, not getting access to and gathering usable evidence from the crime scene.
Law Enforcement's Role in Computer Security Incident Handling

Computer crimes are just that: crimes. Violations of existing law. Conceptually they differ little from any of the other so-called "white-collar" crimes, except that they frequently involve perpetrators who have no physical presence at or even near the crime scene. The usual physical evidence relied upon by forensics analysts, such as fingerprints, footprints, tire marks, signs of forced entry, traces of DNA or bodily fluids, and so on, is conspicuously absent when the crime was carried out from tens, hundreds, or even thousands of miles away.

The task of any investigator is to collect as much evidence as can be found at the scene, analyze that evidence for clues to the perpetrator's identity, and then follow up on leads generated by this analysis. When no direct physical evidence exists, inferential evidence, or evidence that some aspect of the system has been modified as a direct result of the intrusion, is the primary source of clues.

Just as in the case of physical breakins, however, the exact nature and positioning of evidence can be crucial to unraveling the chain of events. Time stamps in logs, records of network activity, new directories and files created by the attacker, incoming/outgoing mail or other packets during the period when the intruder was actively exploiting the system; all of these are important pieces of the overall puzzle. It is important to remember that any change made to the system prior to the arrival of the investigator(s) may obscure or even erase vital forensic information. Under most circumstances, the best thing you can do is to take the box off the network and leave it alone.
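Taking the box off the network is the first half; preserving its state is the second. The standard first step is a bit-for-bit image of the evidence source, verified by cryptographic hash before any analysis begins. The sketch below is my own illustration, not an official procedure: it images a small demo file standing in for the compromised disk, since on a real system you would substitute the raw device (e.g. /dev/sdX, a hypothetical name) and write the image to sterile media.

```shell
#!/usr/bin/env bash
# Illustrative only: copy the evidence source bit-for-bit, then verify.
# 'disk.demo' stands in for a real device such as /dev/sdX (hypothetical).
printf 'compromised-disk-contents' > disk.demo

# Plain copy for the demo; on a real device you would add
# conv=noerror,sync so dd continues past unreadable sectors.
dd if=disk.demo of=evidence.img bs=64K 2>/dev/null

# The two hashes must match before any analysis begins.
sha256sum disk.demo evidence.img
```

Work only on the verified copy afterward; the original stays sealed as evidence.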
What's in it for Them?

Why should law enforcement care about your breakin? The answer to this question may seem obvious (that's what they're paid to do), but consider this: police departments generally get their funding based on the number of cases they handle, and often on the number of cases they successfully prosecute. Some agencies have a minimum loss/damage dollar value below which the prosecuting attorney's office won't bother to pursue a conviction. There are simply too many crimes and not enough resources to devote the same level of effort to each one. This is just a fact of life in any society without unlimited manpower and money (and if you know of one that does not belong in this category, please tell me about it).

The primary benefits of involving law enforcement are twofold:

1. You get legal documentation of the event and of your response to it;
2. You initiate a process that may benefit not only your organization, but others who have been or will be hit by this same perpetrator.

The police, on the other hand, look for cases where evidence of sufficient quantity and quality exists that there is a reasonable chance of finding and prosecuting the perpetrator, and for documentable loss that meets or exceeds their mandated minimum value. If you can provide the 'raw materials' they need to justify their involvement, they're a lot more likely to accept your case and pursue it with the vigor it needs for a successful conclusion. That's not to say that they won't even show up if you don't meet these criteria; I simply suggest that the easier you make it for the investigators, the more likely they'll be able to do the job you ask of them. Common sense is just as useful now (but a lot less common, alas) as it was in Thomas Paine's day.
How They View You

As I have taken pains to point out, the folks that are going to show up at your door in response to a report of criminal computer activity are only human. Just as you have preconceptions about them that may or may not change based on your mutual interaction, so they have them about you.

Of course, the nature of any such preconceptions may vary widely by geographical, occupational, or operational identity, as well as (and probably most importantly) according to previous encounters experienced by the investigator(s). If you're a Computer Security Manager at XYZ Corporation and Detective Smith had a very difficult time dealing with your predecessor, or even with your counterpart at a rival company across town, chances are he's not going to be looking forward to your investigation. That doesn't mean he won't be pleasant, or that he won't do a good job--just that he will have his defenses up during at least your first meeting.

You can go a long way towards ensuring a smooth cooperative effort by being professional, cordial, and respectful. Despite what seems to be the prevailing attitude on the 'net these days, most cops aren't out to get you unless you're a criminal. They are professionals, just like you, and appreciate being treated that way. The Golden Rule hasn't lost any of its relevancy.
When to Report, How to Report

As I hope I've established by now, you will have to make the call whether or not to report the incident. If you choose to report, make certain that this decision has been approved and is supported by senior management, or else prepare to get broadsided. CIOs, CEOs, and other three letter executive types don't like to be the last to know about anything that concerns their company, especially where governmental agency involvement is concerned. Any litigation or media coverage resulting from an event needs to be handled by the legal and public relations folks, respectively; to be effective at their jobs, they'll also need as much heads up as you can provide. Dealing with a computer intrusion is really no different than dealing with a physical breakin, with the same considerations and pitfalls. The crime scene needs to be secured as quickly and as tightly as possible, all evidence should be preserved intact, and everyone not directly involved with the investigation should be kept out.

In a complex network environment containing multiple levels of trusted hosts and shared file systems, just finding all the "prints" left by an intruder can be a daunting task. The more familiar you are with your existing system, and the better documented it is, the easier it will be to determine what, if anything, was modified, deleted, or installed by the attacker. This information is vital, for several reasons. For one thing, it is necessary for making anything like an accurate estimate of the monetary damage resulting from the attack. Secondly, the more complete your knowledge of the state of the system, the simpler the task of restoring it to an identical condition (from those copious backups you'd better have made) becomes. Additionally, if you expect to reconstruct a crime in order to understand it, you have to know what the place looked like before the crime was committed.

Much of what follows is going to be necessarily US-centric (because that's where I live), but the general concepts should be extendable more or less intact to any nation where computer crime is likely to surface. Laws and procedures vary, of course, but the basic precepts for investigating and prosecuting crimes are remarkably similar throughout much of the world, because people are people and computers are computers, no matter where they happen to call home.

There are at least six distinct U.S. federal agencies that have jurisdiction over some type of Internet-related crime: The Federal Bureau of Investigation (FBI), the Secret Service, the Customs Service, the Bureau of Alcohol, Tobacco, and Firearms (BATF), the Federal Trade Commission (FTC), and the Securities and Exchange Commission (SEC). According to the publication "How to Report Internet-Related Crime," a product of the Computer Crime and Intellectual Property Section (CCIPS) of the U. S. Dept. of Justice, computer intrusions should be reported to either your local FBI office, the National Infrastructure Protection Center (NIPC) at (202) 324-0303, or your local Secret Service office. Depending on your circumstances, you may wish to involve local law enforcement authorities as well, although chances are good that the ultimate responsibility for the investigation will end up at the state or federal level, since a great many intrusions cross multiple political boundaries.

One of the best ways to ensure that your interactions with law enforcement will be of optimal benefit to both sides is to establish a rapport with the people responsible for computer crimes in your local area before any crimes are committed. Talk with them--find out what they would like to see from you in the event of an incident, and get their take on the proper way to collect and preserve evidence. After all, they're the ones who will have to make use of that evidence in both tracking and prosecuting the perpetrator(s).
The Pros and Cons of Involving Law Enforcement

Deciding whether or not to report can be a complex issue in itself; there are many aspects to consider. Some of the questions that you might want to ask yourself are:

1. How much loss was suffered (and how easy will it be to quantify)?
2. How long ago did the intrusion take place (i.e., how "warm" is the trail)?
3. Do you have complete and unaltered copies of all relevant logs?
4. Is your firm willing to pursue the matter, understanding that the costs may not be insignificant (salaries, backup media, downtime, court appearances, etc.)?

An additional consideration should be that if any of the logs or files needed as evidence contain proprietary or otherwise sensitive information, that information may become a matter of public record during the course of the trial.

One last note: for better or worse, some companies will avoid pursuing an investigation because they have something to hide (or think they do). If your senior management has been involved in any activity that they feel might appear to be incriminating, they may forbid you to bring in law enforcement with little or no explanation. There isn't much you can do about this; you must remember that as a computer security person you usually don't own the data you're protecting. It is management's call, and you will probably have no choice but to go along with whatever they decide.
Following Chains of Command

Any involvement of an outside agency, particularly of the law enforcement variety, is something that most companies control very tightly. Few things will get you in hot water faster than calling in the cops without following the proper chain of command. Any decision to involve an outside organization in the affairs of the company must be reviewed, approved, and supported by senior management. This is doubly true when that organization is governmental in nature, and triply so when it is law enforcement. As I've indicated above, some companies will have to weigh the potential benefits of bringing in law enforcement with the potential risks of having something uncovered they'd rather keep as a company secret. This is not limited to 'hanky-panky;' often proprietary or otherwise business-sensitive information is brought under public scrutiny at a trial. It may even be a strategy of the defense to subpoena information which the company may not want revealed, simply to throw a monkey wrench into the works and cause management to reconsider its commitment to pursuing prosecution. In this instance, as in all others, be certain to CYA.
Collecting Admissible Evidence

To have any chance at all of obtaining a conviction once a cracker is caught, the prosecution will need evidence that is admissible in court. The details of what can and cannot be admitted into a court of law are complex, and vary from country to country; they are outside the scope of this discussion. For our purposes, only a few general guidelines need to be mentioned.

The principal evidence you will probably have will be in the form of logs. It is critically important that you pay heed to the wording of the rules in force in your country governing the use of logs in a trial. For example, U.S. Code Title 28, Section 1732 (28 USC 1732) dictates that copies of logs are admissible, so long as the original logs were made "in the regular course of business." In a related vein, Rule 803(6) of the US Federal Rules of Evidence states that logs (which might otherwise be considered 'hearsay') are admissible so long as they are "kept in the course of a regularly conducted business activity." This means that you'd be much safer to log everything all the time and deal with the storage issues, rather than try to turn on logging only after a breakin is suspected. Not only is this a bit like closing the barn door after the horse has fled, it may render your logs inadmissible in court.
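One way to make logging part of "the regular course of business" is to have it on by default and forwarded off the compromised machine, which also keeps an intruder from simply editing the local files. As a sketch only (syslog/rsyslog syntax; the loghost name is a placeholder of my own invention), a single forwarding rule sends everything to a dedicated, hardened loghost:

```
# /etc/rsyslog.conf fragment (illustrative; loghost.example.com is a placeholder)
# Forward all facilities and priorities to a central loghost in addition
# to the usual local files. '@@' means TCP; a single '@' is classic UDP syslog.
*.*    @@loghost.example.com:514
```

The central copy, kept continuously and automatically, is exactly the kind of routine business record the admissibility rules contemplate.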

Any physical object involved in the investigation, be it disk, tape, CPU, CD-ROM, keyboard, right down to the power cord, must be handled in strict accordance with Chain of Custody rules. Essentially this means that all items must be tagged, stored in sealed containers, and the identity of every person who has handled or been responsible for them since they were collected as evidence, along with the date and time, recorded on the label of the container. They must never be left alone in an unsecured location, or otherwise placed in any circumstance where tampering by unauthorized persons is likely to occur. This may seem like a bit much to ask in some circumstances, where many things are happening at once and it is easy to lose track of where things are and who has them. However, a reasonably sharp defense attorney will be quick to pounce on violations of chain of custody rules; if the evidence that is rendered inadmissible by this action is essential to the prosecution of the case, you are SOL. Always err on the side of being too safe and too careful when it comes to evidentiary procedures.
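The same discipline extends to digital items: hash each piece of evidence at collection time and keep an append-only custody record of who handled it and when. The fragment below is my own illustrative sketch, not a legally vetted procedure; the file names, custodian, and container number are all hypothetical.

```shell
#!/usr/bin/env bash
# Illustrative chain-of-custody record for one digital evidence item.
printf 'seized drive image' > evidence.img        # stand-in evidence file

# Hash at collection time; any later change to the item changes the hash.
HASH=$(sha256sum evidence.img | awk '{print $1}')

# Append-only log line: timestamp, item, hash, handler, container seal.
printf '%s | evidence.img | %s | J. Doe | sealed container #4\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$HASH" >> custody.log

cat custody.log
```

A record like this doesn't replace physical seals and signatures, but it gives the prosecution a verifiable answer when the defense asks whether the bits could have been altered.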
Cyber Crime and the Courts

The interpretation of new laws by the courts is an ongoing and highly dynamic process. Cyberindustry and its attendant cybercrime have, relatively speaking, only recently leapt out from behind a rock and said 'boo' to the judicial system, so the crafting, implementation, and final interpretation of computer crime-related legislation are really only in their fractious infancy. It is unlikely that any consistent patterns will emerge until each of the broad areas of legislation has been dragged through the courts (especially the appellate process) a few times.

Meanwhile, it would be prudent to keep abreast of cases being heard and familiarize yourself with the decisions and rationales being issued on an increasingly frequent basis. With the geometrically expanding influence and pervasiveness of the Internet-based economy, every decision that comes out of a computer-related trial is going to carry a great deal of weight, at least until legislation begins to keep pace with the technology. I won't even begin to predict where the legal landscape will stabilize regarding computer crime; you'd have better luck predicting the next lottery winner.

If you figure out how to predict the next lottery winner, however, drop me a note. Maybe we can work something out.

Robert G. Ferrell, CISSP, is the Information Systems Security Officer for the National Business Center of the U.S. Dept. of the Interior. He is also active as a Perl Monger, an Internet Technologist, and a member of the Net Wits. He has been involved with (primarily Unix) systems programming, administration, and security on and off since 1977.

Source: http://securityfocus.com


Intelligence Preparation of the Battlefield

Introduction

"Intelligence Preparation of the Battlefield" is a term used in the military that defines the methodology employed to reduce uncertainties concerning the enemy, environment, and terrain for all types of operations. It is a continuous process that is used throughout all planned and executed operations. The networked environment which security professionals are tasked with securing is analogous to a battlefield. The myriad of attackers and intruders from the void are the aggressors constantly on the offense. The security professionals are the defenders, entrusted to preserve the confidentiality and integrity of data against these marauders.

Recent efforts focused on assessment of critical systems and infrastructures have turned up a recurring theme: many system and security administrators are unaware of the level of effort that a determined, well-financed, and well-supported attacker will expend toward successful penetration of a target system or site. Most assume that the major threat will come from "script kiddies" and others who are simply looking for a soft target, and who will move on to easier targets if the initial attempt at compromise is unsuccessful. While this assumption may be true, consideration should also be given to the concept that an attack may be planned and coordinated to a high degree with the specific intent of breaching the target system no matter the cost or effort required.

Security professionals are expected to have a high level of technical competence, and for the most part this is true. However, these same professionals oftentimes do not expect the same to be true of the attackers and intruders against whom they defend their sites. Many do not take heed of the axiom that "There's always someone out there smarter, more knowledgeable, or better-equipped than you."
Setting The Scenario

Let's assume that the opposition is well financed and supported, and that their technical expertise is on par with that of experienced and well-seasoned security administrators. How might this individual, or possibly attack cell, prepare for a successful penetration of a target system? What are the objectives, methodology, techniques, and tools utilized? The following seeks to address these questions, and to give those tasked with security-related responsibilities an appreciation for the extent and level of effort that, in some cases, may be directed against systems for which they are responsible. It can also serve as a template for an assessment conducted as a preemptive security measure.
The First Steps

The attacker will begin by defining an end-state with regard to the targeted site or systems. This end-state is a clearly defined and obtainable objective. Detailed concepts for courses of action will be formulated, and the chosen course of action will concentrate overwhelming "force of effort" at the critical service or vulnerability at the appropriate time and place to achieve the desired effect. Desired results may be denial of service, acquisition of sensitive corporate data, or the establishment and maintenance of clandestine access for recurring use.

Preparation for a successful attack embodies a systematic approach to exploitation. Such an approach fosters effective analysis by enhancing application of professional knowledge, logic and judgment. The attacker will seek to identify and define problems associated with breaching the target defenses, gather facts and make assumptions, develop possible courses of action, and analyze each course of action through "wargaming". Finally, the attacker will choose the best solution available based on the defined end-state and implement the attack.
Estimate of the Situation

In order to develop a coherent strategy, the attacker will complete a thorough estimate of the situation. He will seek to gain a deeper understanding of the task at hand. A review of known facts and information will be conducted. Specific tasks that must be accomplished will be drawn up, and from this task list a reduced essential task list will be constructed. A determination will be made of all constraints and limitations which may influence task accomplishment: how much time is available; location restrictions (can the target system be accessed from the attacker's current location if it lies outside the physical borders of the country where the target resides, or must he move into closer proximity?); the materials required in terms of software and hardware; and the associated cost. The attacker will also determine the acceptable risk. Can he afford to be logged during scanning? Is compromise acceptable during the latter stages of the attack? Is concealment of the originating attack location necessary? And what about exposure of the sponsor if he is working on behalf of another entity? Finally, any critical facts and assumptions not covered previously will be addressed, and a continuous time analysis maintained until the attack is complete.
Intelligence Preparation of the Battlefield

How will the attacker accomplish the tasks that have been outlined? By laying out a focused plan for acquisition of critical information required for successful penetration of the target system. The following methodology is an example. Most, if not all, of these steps will be executed:

* Define the Network Environment
* FootPrinting
* Scanning
* Enumeration
* Vulnerability Mapping
* Attack Strategy Development & Wargaming
* Refinement & Implementation of the Attack

Define The Network Environment

Defining the network environment involves footprinting, scanning, and enumeration. FootPrinting allows the attacker to limit the scope of his activities to those systems that are potentially the most lucrative from a vulnerability perspective. Scanning will tell the attacker what ports are open and what services are running. Enumeration is the extraction of valid account information and exported resources.
FootPrinting

During the footprinting subset of defining the network environment, the attacker's objective is to gather the following information:

* Name and IP of select systems
* Hardware and operating system (including version/build) of the system
* Services available on the system
* Physical location of the system(s)
* Information on individuals associated with the system(s); name, phone #, position, address, knowledgeability etc.
* Build a simple network map for the domain, including connectivity provider and key systems
* Develop any information that may make it easier to conduct "social engineering"

The methodology to accomplish footprinting of the target will involve non-intrusive and stand-off methods. The attacker wants to determine the type of network with which he is dealing, and with whom; system, network, and security administrators. His tactics and techniques will usually involve the following:

* Check for a website associated with the target. Many websites provide a revealing amount of information that can be used in the attack. Other items of interest include: related companies or entities, merger or acquisition news, phone numbers, contact names and email addresses, privacy or security policies indicating the type of security mechanisms in place, links to other web servers related to the organization.
* Gather information that could be used for social engineering: the identity of network systems, system administrators, etc. Run USENET and Web searches on the system administrators and technical contacts found when running host queries. By taking the time to run down this information, the attacker may be able to gain greater insight into the target network. He will also try the system administrator's address on any other machines, if found, when running the host query. Perhaps the system administrator favors one certain machine which can be more readily exploited.

Tools and procedures used to accomplish the task of footprinting:

* Conduct Open Source information gathering on USENET, search engines, Edgar database etc.
* Execute a whois query using the following:

http://www.networksolutions.com/ - Network Solutions whois web interface
http://www.arin.net/ - ARIN whois (North American IP allocations)
http://whois.ripe.net/ - European whois (RIPE)
http://whois.apnic.net/ - Asia Pacific IP address allocations (APNIC)
http://whois.nic.mil/ - US military
http://whois.nic.gov/ - US government

or use the native UNIX whois from the command line:

whois <domain> | more
whois <admin-handle> (to gather information on the SYSADMIN, etc.)
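The whois protocol behind these interfaces is trivially simple: open a TCP connection to port 43, send the query string terminated by CRLF, and read until the server closes the connection (RFC 3912). A minimal sketch in Python; the default server and the function names are illustrative assumptions, not part of the original article:

```python
import socket

def build_query(name):
    # RFC 3912: a whois query is the query string terminated by CRLF
    return name.encode("ascii") + b"\r\n"

def whois(name, server="whois.networksolutions.com", port=43, timeout=10.0):
    """Send a whois query and return the server's complete response."""
    with socket.create_connection((server, port), timeout=timeout) as sock:
        sock.sendall(build_query(name))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("latin-1")
```

In practice the query would be sent to the registry appropriate to the allocation (ARIN, RIPE, or APNIC, as listed above) rather than a single server.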

Again, the intent is to develop a network map using information gathered during footprinting. The attacker will also want to know who provides the target's upstream Internet access. In the event that he cannot exploit the specified target, he may be able to step back one hop to the service provider for the target and gain access from that vantage point. Additionally, he will figure out which systems are routers and firewalls and place them on the map, as well as identifying key systems such as mail servers, domain name servers, file servers, etc.
Scanning & Enumeration

At this point the attacker has a good idea of the machines on the network, their operating systems, who the system administrators are, and any discussions by them as to the topology, policies, management, and administration of their systems posted to newsgroups and other public lists. He also knows that from this point forward everything he does may be logged, and at a minimum will assume it is.

The attacker is now ready to move on to actual reconnaissance of the target: scanning and enumeration. His objectives after the initial assessment of the target system(s) focus on identifying listening services and open ports. Once promising avenues of entry are identified, more intrusive probing can begin as valid user accounts and poorly protected resource shares are enumerated. The techniques, tools and procedures will vary according to his level of expertise and ability to code custom scripts and programs. Regardless, there is a plethora of open source tools available, and he will more than likely make use of some, if not all, of the following: NMAP, STROBE, NESSUS, and the SATAN variants SARA and SAINT if using Linux; WinScan, Sam Spade and others if using a Windows box. Do not discount the possibility that commercial products such as CyberCop Scanner and Internet Security Scanner may be used as well, as these are available for sale on the open market.

The attacker knows that there is never a good time to run a scan, and that once the decision is made to execute one, it should be done only once. He knows that he may get only one chance, and that another opportunity may not be presented: running a scanner is the equivalent of running up to an occupied building with a crowbar in broad daylight and trying all the doors and windows. He will avoid noisy scans of this sort to the maximum extent possible.
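At its simplest, what a scanner such as NMAP does in its default "connect" mode can be sketched in a few lines; the function name is invented for illustration, and completing a full handshake like this is exactly the behavior that shows up in the target's logs:

```python
import socket
from contextlib import closing

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a full TCP connection."""
    open_ports = []
    for port in ports:
        with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the three-way handshake completes
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Real scanners add stealthier probe types (half-open SYN scans, timing controls) precisely to avoid the crowbar-in-daylight problem described above.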

The attacker will also make use of tools available as part of the operating system originating the scan and enumeration such as the following for Unix systems:

* host -l -a <domain> | more
* nslookup -query=HINFO <host>
* dig <domain>
* dig -x <IP address> (do a reverse dig on a couple of systems found when running the host command, to see if they are properly reverse-mapped)
* dig @<nameserver> version.bind chaos txt | more (used to find out if a vulnerable version of "bind" is being run on each of the domain name servers)
* rpcinfo -p <host> (used to identify vulnerable or unnecessary RPC services like SPRAYD, STATD, BIOD and WALLD)

Vulnerability Mapping

Once the preceding has been accomplished, the attacker will study and analyze all the information that has been collected. Vulnerability mapping is conducted to match specific exploits to the target systems found during the previous stages. Public sources such as BugTraq and CERT advisories are consulted, and public exploit code is reviewed, along with the output from scanners such as CyberCop, Nessus and SAINT. If he is not intimately familiar with the operating systems in use, additional study will be conducted prior to gathering the tools required for actually breaching the target.

As a last step in vulnerability mapping, the attacker will gather potential tools for use against the system(s) based on the analysis of the services running, operating system, and other variables. Additionally, he evaluates the selected tools to determine what areas they cover and to identify any gaps that may exist in the required capabilities.
Wargaming

The attacker now moves into the final stage before actually conducting the attack, "Attack Strategy Development & Wargaming". The attacker will develop multiple courses of action (COA) and wargame them, selecting the best COA based on all available information. The plan of attack will depend on what is to be accomplished: compromise of security, access, denial of service, etc. The attacker will conduct rehearsals, laying out how the attack will be accomplished and working through the exploitation process at least mentally. If possible, he will establish a single machine with a distribution identical to the target's and run a series of attacks against it. The intent here is to identify what the attacks are going to look like from the attacking side, and what the attacks will look like from the victim's side. He will also consider the following influencing factors:

* How stealthy does he need to be?
* Does he need root level access to attain his goals?
* Does he want to attain access to other machines? (Deploy sniffers, get passwd files etc.)
* Which exploits are most likely to succeed?
* Will he want to maintain access to the target system, or is this a one crack deal?

The attacker will seek to be totally prepared before any exploits are run. He will not want to be in the position of acquiring access, and then realize that he does not have a log wiper or a sniffer that is required to further his aims. He will also be prepared with strategic backup plans. For example, if the target system doesn't have a compiler, and he needs to compile tools on the system, he will have a compatible compiler ready to FTP to the target site; or have tools pre-compiled for the target operating system. He will adhere to the maxim "FAILING TO PREPARE IS PREPARING TO FAIL!!"
Attack Implementation

Once all preparations are complete, and at the appropriate time based on reconnaissance and analysis of all data, the attack will be initiated. The objectives are to gain access and subsequently to escalate privileges, pilfer data, create backdoors, and cover tracks; if all else fails and the attacker cannot achieve his goals, a denial of service attack remains a possibility. The attacker will execute the identified exploit in an attempt to gain access. If access is gained, no system administrators are on the system, and only user-level access was obtained in the last step, an attempt is now made to gain control of the system through ROOT/ADMINISTRATOR privileges. This can be done using password cracking tools and exploits such as Crack 5.0a, L0phtCrack, rdist, getadmin, sechole, and buffer overflow exploits. Onsite system tools will be used, as well as tools imported to the system.

Assuming ROOT/ADMINISTRATOR privileges have been gained, the attacker will seek to identify mechanisms to access "Trusted Systems" by evaluating trusts, searching for cleartext passwords, and so on. Tools and techniques used can include searching for .rhosts files in users' home directories and elsewhere, gathering user data, and examining system configuration files.
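The .rhosts search described above is essentially a directory walk. A minimal, hypothetical sketch (the function name is invented; the same check is equally useful to an auditor hunting for stray trust files):

```python
import os

def find_rhosts(root):
    """Walk `root` and return the path of every .rhosts file found,
    since each one defines a trust relationship worth examining."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        if ".rhosts" in files:
            hits.append(os.path.join(dirpath, ".rhosts"))
    return hits
```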

Once ownership of the target is established, this fact needs to be hidden from the system administrator. For a Unix-based system, the attacker will unset the history file and execute a log wiper to clean entries from UTMP, WTMP, and Lastlog. For Windows-based systems, event log and registry entries will be cleared/cleaned.

If the attacker wants to maintain access to the system after initial access is achieved, he will set about creating backdoors for future access. The methodology, tools, and techniques are system-dependent, but the intent is to create accounts, schedule batch/cron jobs, infect startup files, enable remote control services/software, and replace legitimate applications and services with trojans. Possible tools include netcat, VNC, and keystroke loggers, along with adding items to the Windows startup folder or configuration files (system.ini, win.ini, autoexec.bat, config.sys, etc.). For UNIX-based systems, entries in the /etc/rc.d directory can be employed.
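Defenders can turn this persistence checklist around: inventory the startup locations when the system is built and flag anything that appears later. A simple sketch, with the function name and the known-good inventory as assumptions of this example:

```python
import os

def new_startup_entries(startup_dir, known_good):
    """Flag files in a startup directory (e.g. /etc/rc.d) that are absent
    from a known-good inventory taken when the system was built."""
    return sorted(set(os.listdir(startup_dir)) - set(known_good))
```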

If all else fails, or if the desired intent is to implement a denial of service (DoS) attack, the intruder will use exploit code to disable the target. This is system/operating system specific and can also depend upon the patch level of the system. SYN floods, ICMP techniques, overlapping fragment/offset bugs, and out-of-band data attacks can be employed. Again, the effect will depend in large part on the system state. Has the system administrator installed the current security package and updated the system files to preclude the Ping of Death, Smurf, Fraggle, teardrop, boink, and newtear exploits? The attacker knows that once exploits become public, they can quickly become useless against systems where the system administrators are on top of things, but he also knows that new exploits are found daily and that research and experimentation is required to find the most effective tool and technique.
Post Attack Review

Whether or not the attack was successful, the attacker will conduct an extensive review of his efforts. The intent is to identify what worked and what did not and why. What methodologies were successfully employed, what tools and techniques were most effective and why? This information is paramount if the attacker has to step back through any of the preceding steps along the way to accomplish his intended objective, and for use against future targets.
Conclusion

Finally, the dedicated attack is not the work of a "script kiddie", or casual system intruder. The opponent that system and security administrators face in this instance is a professional antagonist whose skills may match or exceed their own. As Seth Ross notes in his book Unix System Security Tools: "There are no Turnkey Security Solutions. If computer security is a game, then the enemy makes the rules".

Whether working for himself or some other sponsor, we can be sure that the dedicated attacker will adhere to the following:

"There is no way to become either a master system administrator or a master cracker overnight. The hard truth is this: You may spend weeks studying source code, vulnerabilities, a particular operating system, or other information before you truly understand the nature of an attack and what can be culled from it. Those are the breaks. There is no substitute for experience, nor is there a substitute for perseverance or patience. If you lack any of these attributes, forget it!!" (Maximum Security: A Hacker's Guide to Protecting Your Internet Site and Network, by Anonymous)

We would be wise to heed these words as well...

Doug Fordham is a former Department of Defense Information Systems Security Project Manager whose responsibilities included computer network defense, security auditing, and vulnerability testing.


Source : http://securityfocus.com


Wednesday, September 5, 2007

Starting from Scratch: Formatting and Reinstalling after a Security Incident

Missing files, corrupt data, sluggish performance, programs not working - any of these things could indicate a breach in network security. Once the breach has been identified and mitigated, the painful process of rebuilding and recovery begins. There is a point in the recovery process, after you have done a little digging and put a finger on what might have gone wrong, when you come to the proverbial "fork in the road". Every security professional or systems administrator has faced the decision at some point in his or her career: is it better to try to repair the damage, or just reinstall the system and start from scratch?

This IT dilemma will plague us all at some point. In this article, we will examine the process of starting over, and more specifically, reinstalling as the result of a security incident. We will focus on the steps necessary to prevent a repeat intrusion, get your system back online and ensure a rapid response in the future should this happen again. Needless to say, these steps should be planned in advance of any security incident and should be included in the organization's incident response policy.

Why me?

Before we get into the specifics, let's consider how we have reached this unfortunate point. Obviously, there has been a security incident. An intruder has likely breached and manipulated your machine in some manner. So why not fix the problem? Patch the system, clean up the changes and put it back out there. For any particular exploit, even if a well-documented clean-up procedure is in place, it's hard to ensure that modifications outside of the known scope weren't made. Worms, viruses and rootkits can wreak havoc on any system. They often remove crucial files, embed themselves in other parts of the system and sometimes remain silent. And they can be modified to do other nasty things, making the documented clean-up routines, released after a major incident, obsolete.

The reality, as any incident response expert can attest to, is that discovering all of the changes made to a cracked system is extremely difficult. Once inside a system, an attacker can implement several backdoors, modify standard system operations (such as logging) and hide files. Unless there is a file integrity checker such as Tripwire in place, it's virtually impossible to guarantee a clean system. And there's no worse feeling than spending hours rebuilding a system, only to have it cracked shortly after putting it back up.

Repairing a compromised system is, without a doubt, one of the most challenging aspects of a security professional's job. While it might seem like the easy way out, wiping a system and installing clean versions of the original software is often the smart choice.

Preparation

Before beginning the rebuilding task, there are a few steps to take that will ease the process. First, consider your response to the cracked system. Obviously, the immediate concern is getting the system back to normal. But in the near future, you might need to investigate the box further, learn how the cracker got in, or perhaps turn the evidence over to law enforcement. With that in mind, consider using a ghosting (disk image copier) or disk duplicating program to dump the contents of the system to another hard drive or storage medium. With the duplicate image set aside, you can immediately get to work on restoring the machine without tainting evidence, and focus on the incident analysis later. A raw system image is a requirement for any type of official incident analysis, so this important step is recommended. If you modify or examine the victimized machine in any way, the data will likely be considered invalid by authorities, because so many aspects of the system can be altered. Much like a physical crime scene, this digital evidence needs to be documented, preserved and protected from contamination. Disk imaging software, such as Ghost, provides incident handlers and forensics experts with the clean slate they need to begin an investigation. (For more information on dealing with law enforcement agencies in forensic investigations, please see the SecurityFocus article Incident Management with Law Enforcement by Ron Mendel.)
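Conceptually, imaging plus verification boils down to a byte-for-byte copy and a cryptographic digest computed over the same stream. A simplified sketch under that assumption (real evidence handling would image the raw device through a write blocker, not copy a file):

```python
import hashlib

def image_and_hash(source_path, image_path, chunk_size=1 << 20):
    """Copy `source_path` byte-for-byte to `image_path`, returning the
    SHA-256 of the stream so the copy can later be proven identical."""
    digest = hashlib.sha256()
    with open(source_path, "rb") as src, open(image_path, "wb") as dst:
        while True:
            block = src.read(chunk_size)
            if not block:
                break
            digest.update(block)
            dst.write(block)
    return digest.hexdigest()
```

Recording the digest at acquisition time is what lets an examiner later demonstrate that the working copy matches the original.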

Next, you need to audit the system. Take note of the servers and services running, important configuration files, patches, the third party applications in place, the users, the directory structures, and so on. Additionally, consider saving especially critical files, but be warned that they could have been manipulated or corrupted at some point. Obviously, you'll want to avoid capturing the malicious changes to the system, but you should be able to cull the basics with a general review.

Lastly, have all of the original installation disks, registration codes and support numbers at hand. It's best to have these in place before the process begins, so you aren't frantically digging around for a disk or number in the middle of the setup. In addition to this software repository, keep a journal of each step you take. This record will help track the rebuilding process. Additionally, it might prove to be a handy reference should you need to rebuild the system in the future.

Formatting the Drive

The big step in rebuilding a system, the point of no return, is wiping or formatting the system drives. This will destroy all of the data on the disk and make it possible to reinstall clean system software.

You might wonder if it's possible to repair or upgrade a system, a choice available for many operating systems. If you repair a system, a process which normally requires an emergency or repair disk, the OS cleanses itself by replacing or reinstalling critical system files or missing applications. The problem is that while the repair option might catch some modified or missing files, it likely will not recognize what was added to the system. Therefore any backdoors, extra applications or otherwise malicious code will remain in place, undetected. So a complete reinstall including a disk format is the safer choice when dealing with a compromised machine.

Formatting the drive is, today, a relatively simple process. Most modern operating systems simply require you to insert the installation boot disk. Shortly into the process, you are presented with a list of drives and installed OS's. You'll likely want to select all of the drives for formatting. The partitioning (disk spacing) can be handled automatically, but if you have specific requirements, mimic the previous configuration. In the past, formatting a drive was a somewhat more tedious and mysterious process left up to the user. It required a bootable system disk and a program such as 'fdisk'. If you must use this method, boot from the necessary disk and use the utility to wipe the drive clean.

Another option, if you want more control or assistance with this process, is a third party utility such as 'Partition Magic'. Such software makes it easy to resize existing partitions and format drives in a number of different formats. Consider similar utilities if you encounter problems with the OS formatting process described above.

Rebuilding the Systems

With an empty system in front of you, the next step is to reinstall the OS software. This straightforward process will vary depending on your software. Follow the installation guidelines provided to build the bare-bones system. After the OS, move on to installing specific applications, such as servers, utilities and other programs you require. Again, the process is different for each application, but there shouldn't be any unexpected challenges. If possible, and ONLY if you know they were untouched, reinstall the critical configuration and system files copied from the compromised machine. Or, at the very least, review them while configuring the current setup.

By this point, you should have a decent replication of the original system, but keep in mind - it is still offline. Before reconnecting to the network, security needs to be tightened, or we will end up back where we started. Begin by removing any unnecessary open ports or network services. Use a portscanner such as Nmap to determine what servers are listening. Turn off everything but the absolute essentials. Next, review the running applications. Again, if something seems unnecessary, remove it. We want this system to be spartan in terms of processes - each one is a potential vulnerability. Bring the OS and application level patches up to date. These patches, often security related, are available from the vendor sites. It's a good idea to gather the patches onto a disk before beginning the rebuild. That way, you won't have to put the system online while it's still insecure. Additionally, take note of every patch applied, for future reference.

A vulnerability scanner, such as Nessus, is a good utility to employ during this process. These tools check the system against a known database of vulnerabilities and generate a report of potential threats. Make sure all aspects of the report are addressed before bringing the system up.

Lastly, consider installing an integrity checker (such as Tripwire), which can help in both the short and long terms. Immediately, you'll be concerned with a repeat incident. If you missed the original vulnerability in the rebuilding process and the system is compromised again, an integrity checker will alert you to the changes. Long term, the benefits are similar. If the machine is hit again, a quick list of changes will be available. An intrusion detection system, such as Snort, can also help you monitor the network for the attacker's return. Monitoring is a crucial component once a system is back online and both of these utilities can help immensely.
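The core idea behind an integrity checker such as Tripwire can be sketched in a few lines: record a cryptographic fingerprint per monitored file, then periodically recompute and compare. A minimal illustration under that assumption (not Tripwire's actual implementation, which also signs its database and tracks metadata):

```python
import hashlib

def fingerprint(path):
    """SHA-256 of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def baseline(paths):
    """Record a fingerprint for each monitored file."""
    return {p: fingerprint(p) for p in paths}

def changed_files(paths, base):
    """Return the files whose contents no longer match the baseline."""
    return [p for p in paths if fingerprint(p) != base.get(p)]
```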

An important point, which deserves repeating, is that the system should, if possible, remain unconnected to any network during the OS reinstallation and patching process. This means that you need to compile all of the necessary software - the OS, specific applications, and patches - beforehand. Rebuilding a machine without network connectivity can be done on some operating systems, but is somewhat difficult on others. If circumstances demand network connectivity, proceed with caution. Make sure ALL listening services are shut down prior to connection. Additionally, the machine should be placed behind a firewall which blocks inbound traffic requests. Lastly, it should be the only machine on the particular network segment, to prevent an internal virus or worm from reaching the machine. An unpatched machine is extremely vulnerable to multiple threats, so make sure the proper defense techniques are in place before putting the machine on a network for updates.

Going Back Online

Before bringing the system up, you need to create a system backup. Since you just rebuilt the machine from scratch, it's fair to say that a backup was not in place prior to the compromise. Backups are a fundamental aspect of system administration and security. At some point, they will be needed. In addition to the full backup, you need to create a regular schedule for incremental backups. This will help ensure that frequently modified files are saved to a secure medium.
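Selecting files for an incremental backup typically amounts to comparing modification times against the timestamp of the last backup. A hedged sketch of that selection step (the function name is invented, and real backup products use more robust change tracking than mtime alone):

```python
import os

def modified_since(root, timestamp):
    """List files under `root` changed after `timestamp` -- the candidate
    set for an incremental backup between full backups."""
    changed = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > timestamp:
                changed.append(path)
    return changed
```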

Finally, we can bring the system back online. The fresh build, newly applied patches and security review should prevent an attacker from returning. If the machine is compromised again, it's safe to say you missed the original vulnerability or have fallen prey to an insider attack.

For a while, monitor the system with increased frequency. Review logs, security mechanisms such as filecheckers and intrusion detection systems, and general system activity on a regular and frequent basis. You need to ensure that the machine is no longer vulnerable. Unfortunately, this is invariably a wait-and-see process.

Conclusion

Rebuilding a system is never a pleasant task. It is, however, often the proper choice when dealing with compromised machines. Sometimes, it's the fastest route to restoring the status quo. The process demonstrates how important regular backups and strict security procedures are for networks. When you do need to start over, the basic steps outlined in this article can ensure a rapid return to action and prevention of further incidents.

Matt Tanase is President of Qaddisin. He and his company provide nationwide security consulting services. Additionally, he maintains The Security Blog and the Wifi Security Project, Web logs dedicated to network security.

Source : http://securityfocus.com


Sunday, September 2, 2007

A Method for Forensic Previews

1. A Classic scene from the information security professional's work life

One of your systems administrators pokes his head in your office door. "The print spooler machine may have been compromised. Can you help me take a look? Some odd files have appeared -- that's all we know right now." Your pulse steps up a few beats: you told Operations on more than one occasion that they should address the availability issues faced by critical servers. The print spooler was one of those servers. If it is hacked, it will have to be taken out of production, and there will be serious consequences due to the service interruption. At least you have documented your interactions with Operations: email is forever, you tell yourself. With that thought, you ponder your options to get the organization through this as painlessly and quickly as you can. There is no backup machine, and obtaining a bit-for-bit copy of the spooler's file space is not practical without taking the machine off line. Since there is no solid evidence that the spooler is hacked, it makes sense to do some reconnoitering before taking the machine out of production for extensive forensics. The things you would like to look at include process and network activity, the status of significant binaries, user and group accounts present, the permissions these accounts have, and so on. But how to proceed with this forensic "preview" of the spooler? You do not wish to damage original evidence, and if the spooler is not hacked there is nothing to worry about. On the other hand, what if it is hacked?
2. The preview process

During any computer forensics operation, the state of the target machine must be left as undisturbed as possible. This underlying principle applies to all forensics activities, ranging from the field preview to the full-blown examination in a lab. Nevertheless, there remains an important distinction between a preview operation and lab work: by its nature, the preview is very likely to contaminate original evidence. Examinations in an evidence preservation lab use backup copies of evidence, thereby preserving the initial state of crime scene equipment. Why, then, would an investigator undertake a preview operation? There is often no choice, as the opening scenario demonstrates. But perhaps previews are not that far out of line. After all, risking damage to the original evidence is something an investigator faces during the initial steps of most forensics work. Some level of interaction with the crime scene computer is normally required to obtain a backup for later processing. This issue may even be exacerbated when the crime scene computer is something other than a workstation (such as a mainframe), in which case significant interaction may be required to back up any evidence.

Where computer forensics is concerned, the idea of less is more carries great weight. The less an investigator has to do to interact with and extract information from evidence (or what may become evidence), the better. In the case of the preview, the goal is to determine whether or not a given target machine has been compromised by some unauthorized agent. This determination has to be made without seizing the target machine and forensically processing a backup of its file space.

Following the preview, appropriate next steps may be taken if there has been some sort of compromise. For example, if a machine is simply infected with a virus, perhaps running a virus scan will be sufficient; if a machine has been turned into a "warez" site, perhaps removing it from production and putting it through a full forensics examination is in order. [ref 1] Clearly, the outcome will depend on the sensitivity of the data assets involved, the standing policies of the organization, and the professional assessment of the investigator.
3. The Four Step Plan

We have established what a preview is, and why an investigator might undertake such work. Now, we turn our attention to the broad steps that comprise the forensic preview activity:

1. Related research
2. Passive network operations
3. Active network operations
4. Active host operations

As we proceed through these steps, the investigator's activities become progressively more interactive with the target machine and, hopefully, more revealing of the machine's disposition. Unfortunately, as the preview becomes more interactive, it also becomes more dangerous to the state of evidence. Therefore, it is important that the investigator stops the moment a compromise is evident; continuing on would needlessly risk damaging original evidence. With this approach, it may be possible to determine that a given host has been compromised without, for example, having to directly interact with the operating system looking for a root kit.

Before outlining these steps further, a couple of important guidelines deserve attention:

* Always consider the possible legal ramifications of investigatory activities; consult with your organization's legal counsel in advance of such activities. For example, some of the steps outlined below may constitute a violation of privacy, given the right circumstances

* Document all investigative activities taken. The whole reason to do a forensic preview is to determine, without disruption to production services, whether or not a target machine has been compromised. If it has, the investigator may need to account for the interactions that have taken place as a result of the preview. A compromise does not necessarily translate into a full blown investigation: whether or not a target machine suddenly becomes a crime scene computer is contingent on the type of compromise, organizational policy, and the investigator's judgment. Regardless, all previews are the same in that the target machine could become a crime scene computer. If this happens, the investigator's preview documentation will become the start of a chain of custody [ref 2]

3.1 Step 1: Related Research

In the first step, the investigator uses the process of information discovery to research activities related to the target machine. This is not unlike the process of information discovery described in the Field Guide series of forensic articles on SecurityFocus. [ref 1] Of interest are log data and network flow information made accessible at the enterprise level, including:

* File space monitoring (e.g., logs of unexpected changes to files)
* Intrusion detection system (IDS) activity - network and/or physical
* Firewall activity
* Network flows
* Relevant service/application activity
* Interviews with relevant parties (e.g., system administrators, application administrators and users)

The idea is to find evidence of a compromise without interacting with the target machine on any level. Of course, success will depend on the monitoring in place (and that the logs in question are not stored on the target machine), as well as the quality/quantity of information provided by relevant parties.
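Much of this first step is mechanical log review: matching enterprise log lines against known indicators. A trivial illustration of that matching (the function name and sample indicators are invented for this sketch):

```python
def search_logs(lines, indicators):
    """Return (line_number, line) pairs mentioning any indicator string,
    e.g. a suspect IP address or an artifact such as "ADMROCKS"."""
    return [(n, line) for n, line in enumerate(lines, 1)
            if any(ind in line for ind in indicators)]
```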

If evidence of a compromise is found, the investigator should stop the preview and consider handling the target machine as a crime scene computer. Otherwise, the preview should continue to Step 2.
3.2 Step 2: Passive Network Operations

In this step the investigator uses downstream/inline utilities to observe the target machine's ingress and egress traffic. There are a variety of ways to do this, including network taps, network IDS rules, and span ports on switches. Outside of the use of a span port, sniffing on a switch is not necessarily recommended since it may involve poisoning the ARP cache of the target host (changing the host's state, and perhaps interrupting its services). If the target is on a hub, or is wireless, sniffing becomes a safer choice to implement.

The duration used to monitor traffic depends on the investigator's comfort level with the situation. If the target machine is fulfilling a critical function, or stores highly sensitive data, it may be unreasonable to spend a lot of time in this step.

As in Step 1, if evidence of a compromise is found, the target machine may need to be viewed as a crime scene computer. If nothing of interest turns up, the preview should head to Step 3.
3.3 Step 3: Active Network Operations

By Step 3, the safer, non-interactive means of checking the target machine for compromise have been tried. From here on, the target machine's state will be altered by the activities of the preview. The investigator must minimize these activities to prevent significant harm to potential evidence.

In this step, the two primary tools of interest are port and vulnerability scans.

Port scans will not drastically change the state of a target machine. Nevertheless, the investigator should be aware that a listening service may write out log entries or start and stop processes upon connection establishment. If the target machine is running a network IDS, a port scan may cause a change in network disposition: the scanner could become blocked. The investigator should work with the system administrator to determine what services might interact with a port scan. If there is an IDS or firewall on the target machine, it may be possible to configure the scanner with a trusted address.

Unlike the port scan, vulnerability scans can cause significant changes in the state of a target machine. The degree of change depends on how the scanner is configured, with more aggressive configurations leading to ham-fisted probes and attacks. The system administrator may be able to help fine-tune a vulnerability scan so as not to disturb a host's state unnecessarily. For example, if the target machine has been patched against vulnerability X, it does not make sense to check for X. One reasonable approach is to tune the vulnerability scanner to check for services commonly deployed by script kiddies and malware. Precise, narrowly scoped scans are best: they take less time and cause fewer changes to the target machine's state.
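
To keep the footprint of this step in view, consider how little a port check needs to do. The following is only a sketch of a plain TCP connect probe, not a substitute for a dedicated scanner such as nmap; as noted above, even the full connect may be logged by a listening service, so the time of the probe should go in the investigator's notes:

```python
import socket

def probe_ports(host, ports, timeout=0.5):
    """Attempt a plain TCP connection to each port; return those that accept.

    A full connect (rather than a SYN-only scan) needs no special
    privileges, but the connection itself may appear in service logs --
    record when the probe ran so such entries can be explained later.
    """
    open_ports = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            open_ports.append(port)
        except (socket.timeout, OSError):
            pass  # closed, filtered, or unreachable
        finally:
            s.close()
    return open_ports
```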

Once again, if evidence of a compromise is discovered, the investigator should decide whether or not the target machine becomes a crime scene computer. If no compelling evidence turns up, the preview should advance to Step 4.
3.4 Step 4: Active Host Operations

Here, we directly interact with the target machine's operating system by way of a user account. The careful notes the investigator has been taking all along will carry even more weight in this step, since the activities herein are all but guaranteed to change the target machine's state. Items of interest include basic facts about the target machine's OS, process information, log file data, account information, and the status of file space.

To begin with, the investigator may wish to change the administrative password on the target machine. So long as this is documented, there is little reason it would jeopardize any evidentiary value. If there is a compromise, it may be negligent not to take steps that help block an attacker's administrative access -- the investigator should consult with legal counsel in advance of preview activities.

In this step, we are concerned with the following information targets:

1. Basic system information
2. Running processes
3. Timed jobs
4. Log files
5. User and group accounts
6. File space status

Utilities that aid in gathering the above should come from a known, secure source. It is recommended that such programs be run from read-only media (e.g., CD-R) to manage the risk of using compromised programs on the target machine. However, there is a catch: many utilities are not self-contained and may rely upon libraries and other resources on the target machine. It is impractical to avoid this situation entirely; after all, by its very nature the forensic preview interacts with what could become original evidence.

Along these lines, as files are accessed on the target machine, the times and dates of these accesses will overwrite values in the relevant file metadata. This could make it difficult to show or know that an attacker has made similar accesses, and highlights the tradeoff of forensic previews: in exchange for not taking a target machine out of service, there may be some contamination to possible evidence.

Thought must also be given to data capture during the preview. The investigator might use a network agent to transmit and remotely store all information (e.g., cryptcat, SBD). Any such agent should use strong encryption to ensure the integrity and confidentiality of transmitted information. As an alternative, data could be stored locally to a diskette or USB drive. The volume of data collected should be quite small, consisting of the text output of various utilities, along with copies and excerpts of logs.

To proceed through Step 4, a script or program could be used to collect most, if not all, of the information desired. [ref 3]

Item 1: Basic System Information

Here, we need to collect the basic facts about the target machine. While it is unlikely that this will yield evidence of compromise, the information establishes a context and helps to inform the preview.

What to capture:

* Hardware configuration (though, nothing requiring an interruption of service, like rebooting to get into BIOS, and so on)
* Operating System used, including version and patch level
* Network configuration (IP and MAC addresses assigned to all NICs, ARP cache)
* Major applications installed (though, not necessarily running), and, if possible, their patch levels
* Purpose of the target machine
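
Much of this item can be captured with a short script. The sketch below gathers only what Python's standard library exposes portably; hardware details, installed applications, and patch levels would still come from OS-native tools (e.g., systeminfo on Windows, dmidecode on Linux):

```python
import getpass
import platform
import socket

def basic_system_info():
    """Collect basic facts about the machine this script runs on.

    Establishes context for the preview; fields beyond these (hardware
    configuration, application patch levels) require OS-native tooling.
    """
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),
        "os_release": platform.release(),
        "architecture": platform.machine(),
        "collected_by": getpass.getuser(),  # for the investigator's notes
    }
```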

Item 2: Running Processes

Under this item, processes listening for network connections are of primary interest. Open ports should be compared with what the system administrator believes should be open. Noting the services commonly associated with these ports can also be useful: if the target machine is suddenly offering an IRC service there could be reason for concern. Of equal importance are unusual outbound destinations or traffic types (for example, perhaps the target machine is not hosting IRC, but there is traffic seen going to an IRC server).

Processes that are not listening to a network port can be of interest, too (e.g., a sniffer process monitoring all of the network traffic on the target machine).

What to capture:

* A list of all running applications (with as much detail as possible: name, owner, resources consumed, duration of execution, process ID, libraries and files used, etc.), broken down by
o Applications listening for network connections
o Applications not listening for network connections
* A list from the system administrator of the applications that should be running
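
On a Linux target, the kernel's own accounting offers a quick cross-check against what the system administrator expects to be listening. This Linux-only sketch parses /proc/net/tcp; in a live preview it would be paired with tools such as netstat or lsof run from trusted media:

```python
def linux_listening_ports(proc_net_tcp="/proc/net/tcp"):
    """Parse Linux's /proc/net/tcp and return locally listening TCP ports.

    Linux-only: the second field of each line is local_address:port in
    hexadecimal, and a state field of 0A means LISTEN.  Note this file
    covers IPv4 only; /proc/net/tcp6 holds the IPv6 equivalent.
    """
    ports = set()
    with open(proc_net_tcp) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            local, state = fields[1], fields[3]
            if state == "0A":  # TCP LISTEN
                ports.add(int(local.split(":")[1], 16))
    return ports
```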

Item 3: Timed Jobs

A timed job is one that is scheduled to execute at some point in the future, perhaps iteratively. It may be that the scripting used in a timed job has been altered for malicious purposes. Thus, the investigator should be careful to not only find out what jobs exist, but to inspect their related programming.

What to capture:

* A list of all timed jobs, broken down by
o Jobs to be run at the system level
o Jobs to be run at the account level
* Results of reviewing (in whatever capacity is useful) scripting used in timed jobs
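
Reviewing timed jobs on a Unix-like target can begin with a simple crontab parse. The sketch below handles the user-level crontab format (five schedule fields and a command); system crontabs, which add a user field, would need a small adjustment:

```python
import re

def parse_crontab(text):
    """Extract (schedule, command) pairs from user crontab-style text.

    Skips comments, blank lines, and variable assignments, and treats
    the first five fields of each remaining line as the schedule.  The
    commands returned are what the investigator should inspect next.
    """
    jobs = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or re.match(r"^\w+=", line):
            continue
        fields = line.split(None, 5)
        if len(fields) == 6:
            jobs.append((" ".join(fields[:5]), fields[5]))
    return jobs
```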

Item 4: Log Files

For this item, the investigator should gather system/application alerts and log entries. It is possible for preview activities to end up in the log files under review - notes maintained by the investigator will explain such entries.

The investigator should not overlook host-based firewall and network IDS logs. There may also be tremendous value in reviewing logs that are generated by proprietary applications.

What to capture:

* Important system level messages (such as errors, housekeeping, and application related messages)
* Account access events (authentication and authorization) at both the system and application levels -- to the extent possible, note the fundamental details
o Who (i.e., account in question)
o What (i.e., type of event)
o When
o Where (i.e., from where did the access originate)
o Why (i.e., what was the perceived purpose of the access)
o How (i.e., through what type of channel did the access happen)
* Important application level messages (e.g., web servers, host firewalls, host intrusion detection systems, etc.)
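
Reducing raw log lines to the who/what/when/where fields above is a natural job for a script. The sketch below matches OpenSSH-style authentication entries; the pattern is purely illustrative, since real formats vary by OS, application, and syslog configuration:

```python
import re

# Pattern for OpenSSH-style auth entries; illustrative only -- adapt it
# to the actual log format on the target machine.
SSH_AUTH = re.compile(
    r"(?P<when>\w{3}\s+\d+ [\d:]+) \S+ sshd\[\d+\]: "
    r"(?P<what>Accepted|Failed) \S+ for (?:invalid user )?"
    r"(?P<who>\S+) from (?P<where>\S+)"
)

def parse_auth_events(lines):
    """Reduce auth log lines to the who/what/when/where of each access."""
    events = []
    for line in lines:
        m = SSH_AUTH.search(line)
        if m:
            events.append(m.groupdict())
    return events
```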

Item 5: User and Group Accounts

Here, we want to see if there are any unauthorized accounts on the target machine, and whether or not any accounts have been assigned unjustified access permissions.

What to capture:

* A list of all individual and group accounts
* A list of all currently active accounts (for example, who is on the system right now? What are they up to?)
* A list of critical file resources (such as data files, applications, etc.) on the target machine, along with their assigned permissions
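
On a Unix-like target, a quick pass over /etc/passwd-style data can surface backdoor accounts. A minimal sketch:

```python
def audit_passwd(text):
    """Review /etc/passwd-style text for accounts worth a closer look.

    Returns the full account list plus any non-root accounts with UID 0,
    a classic sign of a backdoor account on Unix-like systems.
    """
    accounts, uid0 = [], []
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        name, _pw, uid, *_rest = line.split(":")
        accounts.append(name)
        if uid == "0" and name != "root":
            uid0.append(name)
    return accounts, uid0
```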

Item 6: File Space Status

Last, the investigator should enumerate file permissions (note the overlap with User and Group Accounts above), look for unauthorized file activities, and check for unusually named and hidden files. Doing more than this is not practical from a time perspective, and could cause an undue processing burden. If the target machine should become a crime scene computer, there will certainly be occasion to make a file space backup, search for strings of interest, examine slack and unused blocks, and build a timeline of activities.

What to capture:

* A list of important and critical file resources on the target machine, along with their assigned permissions
* Any local, file space monitoring logs (if they exist)
* A list of unusually named and hidden files
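
A rough heuristic can help with the unusually named and hidden files mentioned above. The sketch below flags dot-prefixed names, names made only of dots and spaces, and names with trailing whitespace -- all classic hiding spots; expect false positives and treat the output as leads, not findings:

```python
import os

def find_unusual_files(root):
    """Flag hidden and unusually named entries under `root`.

    Heuristic sketch: dot-prefixed names (hidden on Unix), names made
    only of dots and spaces, and names with trailing whitespace.
    """
    flagged = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            only_filler = not name.strip(". ")
            if name.startswith(".") or only_filler or name != name.rstrip():
                flagged.append(os.path.join(dirpath, name))
    return sorted(flagged)
```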

Overall, this step is clearly more involved than the previous ones due to its fully interactive nature. This makes it an ideal candidate for some level of automation through programming and/or scripting. As with the previous three steps, if evidence of a compromise is uncovered, the investigator will need to determine whether or not the target machine is a crime scene computer. If no such evidence is uncovered, the best the investigator can do is claim a low probability that the target machine has been compromised.
4. Departing Thoughts

There may be concern about the time needed to apply this forensic preview method. Going back to the opening scenario, what if it had to be known immediately whether or not the spooler was compromised? This concern may be moot, for the following reasons:

* Of course it has to be immediately known! Is it really ever okay to put something like this off?
* Because the forensic preview activities do not interrupt a target machine's production service, the investigator should be allowed to come to a conclusion as soon as possible -- not within some arbitrarily short time period. That said, this preview method is designed so that analysis happens as the four steps unfold. Doing otherwise may needlessly contaminate potential evidence
* The first three steps have the potential to be evaluated very quickly. Their speed depends on how mature an organization's monitoring processes are, and how readily available and knowledgeable the system administrator is
* The last step can be streamlined if the investigator spends time assembling the necessary tools and a plan of attack

Perhaps a more important issue is what to do if a preview fails to reveal a compromise. Failing to uncover evidence of compromise does not mean the target machine is secure. At best, an investigator can only claim a low probability that the target machine is compromised. The next steps depend on three things:

1. The organization's policies with respect to incident handling
2. What has led the system administrator to suspect a compromise
3. The investigator's judgment given the sensitivity and criticality of the data present on the target machine

Based on the above, the target machine might be removed from production for a more thorough examination. On the other hand, given that nothing was found in the forensic preview, the cost of service loss may outweigh the risk of leaving the machine in production. The decision (and risk) rests with an organization's management.

Perhaps the most compelling reason to use a forensic preview method is that it helps to maintain the evidentiary value of a target machine. By using a repeatable, documented method, and by carefully noting all actions taken, the investigator can rationally account for the state of gathered evidence. This is essential if a chain of custody needs to be established as more rigorous forensics operations take place.

Remember that the forensic preview is not a panacea! The bottom line is that some activities in the preview process can significantly disturb potential evidence. To manage this risk, it is critical that organizations formally document and implement a preview procedure for investigators to use. Doing so will establish a sound method that can be applied in almost any circumstance, lending credibility to the actions taken by the investigator and to the evidence gathered.


References

[ref 1] For a complete description of the search and seizure process as it relates to computer crime, please see my earlier series of articles titled "The Field Guide for Investigating Computer Crime" at http://www.securityfocus.com/infocus/1244.

[ref 2] The chain of custody, or chain of evidence, is a means of accounting for who has touched a given piece of evidence, when they touched it, and what they did to it.

[ref 3] A forensics toolkit suitable for previews can be found on the web at http://www.e-fense.com/helix/. While still a little rough around the edges, Helix offers tools for preview work on several computing platforms.

Related links

http://www.sans.org/score/checklists/ID_Windows.pdf

http://www.sans.org/score/checklists/ID_Linux.pdf

http://www.sysinternals.com

http://www.cisecurity.org

http://www.cert.org

http://www.cybercrime.gov/s&smanual2002.htm#_IIIA_

http://www.sleuthkit.org/index.php

http://www.securityfocus.com/infocus/1244

http://www.cycom.se/dl/sbd

http://farm9.org/Cryptcat/

http://www.e-fense.com/helix/

About the author

For the past several years, Timothy Wright has been investigating computer fraud and abuse in the private sector and, more recently, higher education. He has worked as a Senior Technology Investigator at one of America's largest financial corporations, and as a lead developer within the financial industry, designing and building web-based home banking software. He presently works as an IT Auditor for a university in the midwest United States, and holds an M.S. in Computer Science, and a B.A. in Philosophy.

Source: http://securityfocus.com


Active Directory Design Considerations for Small Networks

A lot of people who are new to networking or who work primarily on larger networks seem to underestimate the design considerations for small networks. It kind of makes sense when you think about it though. From an Active Directory standpoint, what’s really to consider? After all, most small networks have a single forest and a single domain. Even so, your network will run a lot more smoothly if you take the time to do a little planning first. In this article, I will discuss some of the issues involved in planning a small Active Directory deployment.
The Definition of a Small Network

The word small means different things to different people. For example, I consider my own network to be small. I'm running a one-man show with about 20 computers. On the other hand, a Fortune 500 company might consider a subsidiary with a thousand users to have a small network. For the purposes of this article, I will define a small network as a network with fewer than a hundred users.
Domain Planning

One of the first things that you will need to plan for your small network is the domain structure. At first, this probably sounds like overkill. After all, most small networks are single forest, single domain. The argument could be made that you need to plan for future growth, but a single Windows Server 2003 domain controller can accommodate millions of objects in the directory. Even if you were using an ancient Windows NT 4.0 domain controller, the limit is still somewhere around 40,000 users. So why is it so important to plan a domain structure for such a small network?

It has to do with administrative control. More than likely, if your network has fewer than a hundred users, you are going to be the network's only administrator. Geography has a way of changing that though. For example, imagine that those hundred employees are scattered among three different offices in three different parts of the country. Are you still going to try to manage the network for all three offices yourself, or would you prefer to have some help?

Let’s say for the sake of argument that you decide that you do want some help running the networks in the remote offices because they are so far away. The questions now are how much help do you want and how much trust do you have in the remote administrators?

These questions are important because you have a couple of options. If you just want the remote administrators to be able to reset passwords, unlock user accounts, and things like that, then you are probably best off creating an organizational unit for each remote office and then placing the user accounts from each office into the appropriate organizational unit. If, on the other hand, you want to hand over complete control of the remote offices to the remote administrators (while you keep control of the forest), then you are probably better off creating a separate domain for each office.
Resource Planning

For the sake of argument, let’s assume that you decided to go with a single domain for your network. Regardless of whether there are any remote administrators in the picture or not, you are going to have to make some important design decisions regarding your remote offices. These decisions have to do with what types of servers (if any) you want to place in the remote facilities.

These types of decisions are always a big deal, but they are even more important in small companies because you have to balance the cost of the servers (and their impact on your budget) with the benefit that they will provide.

Let’s pretend that your company has a really tight IT budget (hmm… maybe we aren’t pretending on that part) and that you decide not to put any servers at all in the remote offices. Your network can function like this, but you are completely at the mercy of the speed and reliability of the WAN link between the remote office and the main office. If the WAN link goes down, nobody in the remote office will even be able to log in.

Of course WAN links go down all the time, and having a whole office full of people who can’t log in until the problem is fixed probably isn’t good for business. So let’s say that we are going to put a domain controller in each remote office so that people can log in whether the WAN link is available or not. Does this really solve the problem though? Not really. If anything, it creates some other problems.

First of all, having a locally available domain controller does not guarantee that users will be able to log in (unless we are talking about Windows NT). In an Active Directory environment, users must be able to contact a global catalog server in order to log in. The only user who can log in without access to a global catalog server is the domain Administrator. This problem is easy to fix though. You can just designate each office’s domain controller to be a global catalog server. This will allow users to log onto the network when the WAN link is down, assuming that the users can communicate with the domain controller.

Even if a domain controller is available locally, and the domain controller is designated to be a global catalog server, users won’t be able to log in if they can’t communicate with the domain controller. There are a couple of things that can cause this to happen. One reason why users might not be able to communicate with the domain controller is because they don’t have an IP address assigned to their computer. Think about that one for a minute. If the only available DHCP server is in another building and the WAN link goes down then nobody in the remote office will be able to lease an IP address. Therefore, it’s probably a good idea to have a DHCP server in the remote office.

Another reason why a domain controller might be inaccessible is that Active Directory is completely dependent on DNS. If the DNS server is in the main office and the WAN link goes down, then clients in the remote office may not be able to resolve the name of the local domain controller.
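
This dependency is easy to verify from a client in the remote office. The sketch below simply checks whether a name resolves with the client's configured DNS; the domain controller name in the usage note is hypothetical:

```python
import socket

def can_resolve(hostname):
    """Return True if `hostname` resolves via the client's configured DNS.

    If a remote-office client cannot resolve the domain controller's
    name while the WAN link is down, the problem is DNS placement, not
    the domain controller itself.
    """
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False
```

For example, can_resolve("dc1.yourdomain.local") returning False during a WAN outage (the name is a placeholder; substitute your own DC's FQDN) points straight at DNS placement.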

So let’s say that you decide to spend some bucks and put a DNS server, a DHCP server, and a domain controller in the remote offices. There are still a couple of issues that you may have to deal with. One issue is excessive replication traffic. Every time the Active Directory is updated in any one of the offices (such as adding a user account or changing a password) the update is propagated across the WAN link to the other domain controllers in the other offices. If the Active Directory is updated frequently, this replication traffic can really put a strain on your bandwidth.

The solution here is to create a separate Active Directory site for each office. Active Directory replication traffic will still need to be sent to the domain controllers in the remote offices, but it can be scheduled and sent in batches rather than constantly flooding the WAN link with replication traffic.

The other problem that you might run into is availability of data. Assuming that the remote offices have a domain controller, a global catalog server, a DHCP server, and a DNS server, then users in that office will be able to log in even if the WAN link goes down. However, being able to log in doesn’t mean much if the users can’t access their data.

There are a couple of ways around this problem. The appropriate course of action would depend on whether or not data is shared among the various offices. If there is no need to share data between offices, then the best course of action is probably to put a file server in each office and have the users save their data directly to that server. If data does need to be shared among offices, then you are probably best off setting up a DFS server in each office. That way, each office contains a server with a full replica of the company’s data. If a WAN link goes down, users can still access the entire data set. When the WAN link comes back up then any changes that have been made to the data are synchronized with the other DFS servers in the other offices.
Conclusion

In this article, I have mentioned a lot of things that need to be present in the remote offices so that users can continue to work even if a WAN link goes down. If budget is a concern, you can probably get by with lumping all of these roles into a single server, though it’s usually considered a best practice not to use a domain controller as a file server (for security and performance reasons). You have to do what’s appropriate for your individual company. In a situation like the one that I described above, I would recommend placing two servers in each remote office. One server would act as a domain controller, global catalog, DHCP, and DNS server. The other would act as a file server (possibly a DFS server).

About Brien M. Posey

Brien Posey is an award-winning author who has written over 3,000 articles and written or contributed to 27 books. You can visit Brien’s personal Web site at www.brienposey.com.

Source: www.windowsnetworking.com
