
Friday, August 31, 2007

Linux Firewall-related /proc Entries

By: Brian Hatch 2003-07-14

Most people, when creating a Linux firewall, concentrate solely on manipulating kernel network filters: the rulesets you create using userspace tools such as iptables (2.4 kernels), ipchains (2.2 kernels), or even ipfwadm (2.0 kernels).

However there are kernel variables -- independent of any kernel filtering rules -- that affect how the kernel handles network packets. This article will discuss these variables and the effect they have on the network security of your Linux host or firewall.
What is Linux's /proc directory?
There are many settings inside the Linux kernel that can vary from machine to machine. Traditionally, these were set at compile time, or sometimes were modifiable through oft-esoteric system calls. For example, each machine has a host name, which is set at boot time using the sethostname(2) system call, while iptables reads and modifies your Netfilter rules using getsockopt(2) and setsockopt(2), respectively.

Modern Linux kernels have many settings that can be changed. Providing or overloading a plethora of system calls becomes unwieldy, and forcing administrators to write C code to change them at run time is a pain. Instead, the /proc filesystem was created.[1] /proc is a virtual filesystem -- it does not reside on any physical or remotely mounted disk -- that provides a view of the system configuration and runtime state.

The /proc filesystem can be navigated just like any filesystem. Entries all appear to be standard files, directories, and symlinks, but are actually views into the kernel information itself. Some of these can be modified by root, but most are read only. To view these files, cat and more are your friends:

# cd /proc
# ls -l version
-r--r--r-- 1 root root 0 Jun 20 18:30 /proc/version
# cat version
Linux version 2.4.21 (guru@example.com) (gcc version 2.95.4 20011002) ...

Note that the kernel fudges the ls output a bit: these files appear to have content when viewed, but always report a length of 0 bytes. Rather than waste time figuring out how much output would be produced if the file were read, the kernel just reports 0 for most statistics and gives the current time for all timestamps.
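You can see the discrepancy for yourself (the byte count shown is illustrative; it varies with your kernel):

# ls -l /proc/version | awk '{print $5}'
0
# wc -c < /proc/version
134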

/proc/sys

All the /proc entries that can be modified live inside the /proc/sys directory. You can modify these in two different ways: with standard Unix commands, or via sysctl. The following examples show how you can set the hostname using both methods:

Changing /proc pseudo-files manually

# ls -l /proc/sys/kernel/hostname
-rw-r--r-- 1 root root 0 Jun 20 18:30 /proc/sys/kernel/hostname

# hostname
catinthehat

# cat /proc/sys/kernel/hostname
catinthehat

# echo 'redfishbluefish' > /proc/sys/kernel/hostname

# hostname
redfishbluefish


Changing /proc pseudo-files via sysctl

# hostname
redfishbluefish

# sysctl kernel.hostname
kernel.hostname = redfishbluefish

# sysctl -w kernel.hostname=hop-on-pop
kernel.hostname = hop-on-pop

# hostname
hop-on-pop

Note that the main difference between these two methods is that sysctl uses dots[2] as separators instead of slashes, and the leading /proc/sys/ is assumed. sysctl can also be run with a file as an argument, in which case all variable modifications in that file are performed:

# hostname
hop-on-pop

# cat reset_hostname
; Set our hostname
kernel.hostname=butterbattlebook
;
; Turn on syncookies
net.ipv4.tcp_syncookies = 1

# sysctl -p reset_hostname
kernel.hostname = butterbattlebook
net.ipv4.tcp_syncookies = 1

# hostname
butterbattlebook

If -p is used and no filename is provided, the file /etc/sysctl.conf will be read.

The changes you make to /proc variables affect only the currently running kernel; they will revert to the compile-time defaults at the next reboot. If you wish your changes to be permanent, you can either create a startup script that sets the variables to your liking, or you can create an /etc/sysctl.conf file. Most Linux distributions will run sysctl -p at some point during the normal bootup process.
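For example, a minimal /etc/sysctl.conf capturing several of the settings discussed below might look like this (a sketch; choose values to suit your environment):

# /etc/sysctl.conf -- applied by 'sysctl -p' during bootup
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0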

Firewall-related /proc entries
While there are many different kernel variables you can tweak, this article will only discuss those specifically related to protecting your Linux machine from network attacks. Also, we'll restrict ourselves to the IPv4 variables rather than their IPv6 counterparts, since the latter inherit settings from the former where appropriate anyway.

If you're interested in learning about other kernel variables, read the proc(5) man page. There are also several files inside the Documentation directory of the kernel source that may provide more information; /usr/src/linux/Documentation/filesystems/proc.txt and /usr/src/linux/Documentation/networking/ip-sysctl.txt are good starting points.

Some kernel variables are integers, such as kernel.random.entropy_avail, which contains the bits of entropy available to the random number generator. Others are arbitrary strings, such as fs.inode-state, which contains the number of allocated and free kernel inodes separated by spaces. However, most of the firewall-related variables are simple binary values, where '1' means on and '0' means off.

A Linux machine can have more than one interface, and you can set some variables on different interfaces independently. These are in the /proc/sys/net/ipv4/conf directory, which contains all the current interfaces available, such as lo, eth0, eth1, or wav0, and two other directories, all and default.

When you change variables in the /proc/sys/net/ipv4/conf/all directory, the variable for every interface, as well as the default, is changed. When you change variables in /proc/sys/net/ipv4/conf/default, all future interfaces will get the value you specify. The latter only matters on machines that can add interfaces at run time, such as laptops with PCMCIA cards, or machines that create new interfaces for VPNs or PPP, for example.
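To illustrate the three scopes (a sketch, assuming an eth0 interface exists):

# echo 1 > /proc/sys/net/ipv4/conf/eth0/rp_filter     # just eth0
# echo 1 > /proc/sys/net/ipv4/conf/all/rp_filter      # every current interface
# echo 1 > /proc/sys/net/ipv4/conf/default/rp_filter  # interfaces added later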

Proc files
Below are /proc settings that you can tweak to secure your network configuration. I've prepended each filename with either enable (1) or disable (0) to show you my suggested settings where applicable. You can actually use the following handy shell functions to set these in a startup script if you prefer:

enable ()  { for file in "$@"; do echo 1 > "$file"; done; }
disable () { for file in "$@"; do echo 0 > "$file"; done; }

enable /proc/sys/net/ipv4/icmp_echo_ignore_all
When enabled, ignore all ICMP ECHO REQUEST (ping) packets. Does nothing to actually increase security, but can hide you from ping sweeps, which may prevent you from being port scanned. Nmap, for example, will not scan unpingable hosts unless -P0 is specified. This will prevent normal network connectivity tests, however.

enable /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
When enabled, ignore broadcast and multicast pings. It's a good idea to ignore these to prevent you from becoming an inadvertent participant in a distributed denial of service attack, such as Smurf.

disable /proc/sys/net/ipv4/conf/*/accept_source_route
When source routed packets are allowed, an attacker can forge the source IP address of connections by explicitly saying how a packet should be routed across the Internet. This could enable them to abuse trust relationships or get around TCP Wrapper-style access lists. There's no need for source routing on today's Internet.

enable /proc/sys/net/ipv4/conf/*/rp_filter
When enabled, if a packet comes in on one interface, but our response would go out a different interface, drop the packet. Unnecessary on hosts with only one interface, but remember, PPP and VPN connections usually have their own interface, so it's a good idea to enable it anyway. Can be a problem for routers on a network that has dynamically changing routes. However on firewall/routers that are the single connection between networks, this automatically provides spoofing protection without network ACLs.

disable /proc/sys/net/ipv4/conf/*/accept_redirects
When you send a packet destined to a remote machine you usually send it to a default router. If this machine sends an ICMP redirect, it lets you know that there is a different router to which you should address the packet for a better route, and your machine will send the packet there instead. A cracker can use ICMP redirects to trick you into sending your packets through a machine it controls to perform man-in-the-middle attacks. This should certainly never be enabled on a well configured router.

disable /proc/sys/net/ipv4/conf/*/secure_redirects
Honor ICMP redirects only when they come from a router that is currently set up as a default gateway. Should only be enabled if you have multiple routers on your network. If your network is fairly static and stable, it's better to leave this disabled.

disable /proc/sys/net/ipv4/conf/*/send_redirects
If you're a router and there are alternate routes of which you should inform your clients (you have multiple routers on your networks), you'll want to enable this. If you have a stable network where hosts already have the correct routes set up, this should not be necessary, and it's never needed for non-routing hosts.

disable /proc/sys/net/ipv4/ip_forward
If you're a router, this needs to be enabled; otherwise, leave it disabled. This applies to VPN interfaces as well. If you do need to forward packets from one interface to another, make sure you have appropriate kernel ACLs set to allow only the traffic you want to forward.

(integer) /proc/sys/net/ipv4/ipfrag_high_thresh
The kernel needs to allocate memory to reassemble fragmented packets. Once this limit is reached, the kernel will start discarding fragmented packets. Setting this too low or too high can leave you vulnerable to denial of service: while under an attack of many fragmented packets, too low a value will cause legitimate fragmented packets to be dropped, while too high a value can cause excessive memory and CPU use defragmenting attack packets.

(integer) /proc/sys/net/ipv4/ipfrag_low_thresh
Similar to ipfrag_high_thresh; the minimum amount of memory you want to allow for fragment reassembly.

(integer) /proc/sys/net/ipv4/ipfrag_time
The number of seconds the kernel should keep IP fragments before discarding them. Thirty seconds is usually a good value. Decrease this if attackers are forging fragments, and you'll be better able to service legitimate connections.
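For example, you might inspect the current high-water mark and shorten the reassembly timeout (the numbers shown are illustrative; defaults vary by kernel):

# cat /proc/sys/net/ipv4/ipfrag_high_thresh
262144
# sysctl -w net.ipv4.ipfrag_low_thresh=196608
net.ipv4.ipfrag_low_thresh = 196608
# sysctl -w net.ipv4.ipfrag_time=20
net.ipv4.ipfrag_time = 20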

enable /proc/sys/net/ipv4/ip_always_defrag
Always defragment fragmented packets before passing them along through the firewall. Linux 2.4 and later kernels do not have this /proc entry; defragmentation is turned on by default.

(integer) /proc/sys/net/ipv4/tcp_max_orphans
The number of local sockets no longer attached to any process that the kernel will maintain. These sockets are usually the result of failed network connections, such as a FIN-WAIT state where the remote end has not acknowledged the teardown of a TCP connection. After this limit has been reached, orphaned connections are removed from the kernel immediately. If your firewall is acting as a standard packet filter, this variable should not come into play, but it is helpful on connection endpoints such as Web servers. This variable is set at boot time to a value appropriate to the amount of memory on your system.

Other related variables that may be useful include tcp_retries1 (how many TCP retries we send before giving up), tcp_retries2 (how many TCP retries we send on an existing TCP connection before giving up), tcp_orphan_retries (how many retries to send for connections we've closed), and tcp_fin_timeout (how long we'll maintain sockets in partially closed states before dropping them). All of these parameters can be tweaked to fit the purpose of the machine, and are not purely security related.

(integer) /proc/sys/net/ipv4/icmp_ratelimit
(integer) /proc/sys/net/ipv4/icmp_ratemask
Together, these two variables let you limit how frequently certain ICMP packets are generated. icmp_ratelimit defines how many jiffies (a unit of time, 1/100th of a second on most architectures) must elapse between ICMP packets of the types selected by icmp_ratemask. The ratemask is a logical OR of the bits for all the ICMP types you wish to rate limit. (See /usr/include/linux/icmp.h for the actual values.) The default mask includes destination unreachable, source quench, time exceeded, and parameter problem. Raising the limit can slow down or potentially confuse port scans, but may also inhibit legitimate network error indicators.
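As a worked example, destination unreachable is ICMP type 3, source quench type 4, time exceeded type 11, and parameter problem type 12, so the default mask works out as follows:

# echo $(( (1<<3) | (1<<4) | (1<<11) | (1<<12) ))
6168
# cat /proc/sys/net/ipv4/icmp_ratemask
6168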

enable /proc/sys/net/ipv4/conf/*/log_martians
Have the kernel send syslog messages when packets are received with illegal ("martian") addresses.

(integer) /proc/sys/net/ipv4/neigh/*/locktime
Reject ARP address changes if the existing entry is less than this many jiffies old. If an attacker on your LAN uses ARP poisoning to perform a man-in-the-middle attack, raising this variable can prevent ARP cache thrashing.

(integer) /proc/sys/net/ipv4/neigh/*/gc_stale_time
How often in seconds to clean out old ARP entries and make a new ARP request. Lower values will allow the server to more quickly adjust to a valid IP migration (good) or an ARP poisoning attack (bad).
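Both can be set per interface with sysctl (a sketch assuming an eth0 interface; remember that locktime is measured in jiffies and gc_stale_time in seconds):

# sysctl -w net.ipv4.neigh.eth0.locktime=500
net.ipv4.neigh.eth0.locktime = 500
# sysctl -w net.ipv4.neigh.eth0.gc_stale_time=120
net.ipv4.neigh.eth0.gc_stale_time = 120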

disable /proc/sys/net/ipv4/conf/*/proxy_arp
Reply to ARP requests if we have a route to the host in question. This may be necessary in some firewall or VPN/router setups, but is generally a bad idea on hosts.

enable /proc/sys/net/ipv4/tcp_syncookies
A very popular denial of service attack involves a cracker sending many (possibly forged) SYN packets to your server, but never completing the TCP three way handshake. This quickly uses up slots in the kernel's half open queue, preventing legitimate connections from succeeding. Since a connection does not need to be completed, there need be no resources used on the attacking machine, so this is easy to perform and maintain.

If the tcp_syncookies variable is set (only available if your kernel was compiled with CONFIG_SYN_COOKIES), the kernel handles TCP SYN packets normally until its queue is full, at which point the SYN cookie functionality kicks in.

SYN cookies work by not using a SYN queue at all. Instead, the kernel replies to any SYN packet with a SYN|ACK as normal, but it presents a specially crafted TCP sequence number that encodes the source and destination IP addresses and ports and the time the packet was sent. An attacker performing a SYN flood from spoofed addresses would never receive this reply at all, and so would never respond. A legitimate client sends the third packet of the three-way handshake, which echoes this sequence number, and the server can verify that it corresponds to a valid SYN cookie and allow the connection, even though there is no corresponding entry in the SYN queue.

Enabling SYN cookies is a very simple way to defeat SYN flood attacks while using only a bit more CPU time for the cookie creation and verification. Since the alternative is to reject all incoming connections, enabling SYN cookies is an obvious choice. For more information about the inner workings of SYN cookies, see http://cr.yp.to/syncookies.html
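To check for the compile-time option and turn the feature on (a sketch; the kernel config file location varies by distribution):

# grep CONFIG_SYN_COOKIES /boot/config-$(uname -r)
CONFIG_SYN_COOKIES=y
# echo 1 > /proc/sys/net/ipv4/tcp_syncookies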

Summary
When creating a Linux firewall, or hardening a Linux host, there are many kernel variables that can be utilized to help secure the default networking stack. Coupled with more advanced rules, such as Netfilter (iptables) kernel ACLs, you can have a very secure machine with a minimum of fuss.

Brian Hatch is the author of Hacking Linux Exposed, 2nd Edition, Building Linux VPNs, and of the weekly Linux Security: Tips, Tricks, and Hackery Newsletter. While he admits the /proc interface is extremely powerful, he prefers to change kernel variables by modifying /dev/kmem manually using 'dd if=/dev/random of=/dev/kmem bs=2 count=1 seek=...'


Relevant Links

[1] Actually, kernel variables have been tweakable via the _sysctl(2) call since the olden days of the Linux kernel. Unfortunately, the actual kernel variable names change between versions, whereas the locations inside the /proc filesystem are more static, so _sysctl(2) is deprecated.

[2] Sysctl can use slashes instead of dots, actually, but it is traditional/historical to use dots instead.


Source : http://securityfocus.com


The Enemy Within: Firewalls and Backdoors

Can your security infrastructure protect you when you've left the key under the mat?

As a modern IT professional you've done all the right things to keep the "bad guys" out: you protected your network with firewalls and/or proxies, deployed anti-virus software across all platforms, and secured your mobile workstations with personal firewalls. You may even be in the process of designing and deploying an enterprise-wide network and host intrusion detection framework to help keep an even closer eye on what's going on. Even with all this, are you really safe? Can your multiple lines of defense truly protect your network from modern methods of intrusion?

This article presents an overview of modern backdoor techniques, discusses how they can be used to bypass the security infrastructure that exists in most network deployments, and issues a wake-up call for those relying on current technologies to safeguard their systems and networks.

The Fundamentals of Firewalls

Before a discussion of modern backdoor techniques can take place, it is necessary to first look at what obstacles an attacker must get through. Firewalls are an integral part of a comprehensive security framework for your network; if they are relied on too heavily, however, they can also be the weakest link in your defense strategy.

There are different flavors/combinations of "standard" firewalls to choose from depending on your environment:

Packet filters [1]

* Operates at Layer 3
* Also known as Port-based firewalls
* Each packet is compared against a list of rules (source/destination address, source/destination port, protocol)
* Inexpensive and fast, but least secure
* 20-year old technology
* Breaks more complex applications (e.g. FTP)
* Example: router access control lists (ACL)

Circuit-level gateways

* Operates at Layer 4
* Relays TCP connections based on port
* Inexpensive but more secure than packet filters
* Generally requires work on the user or application configuration end to support
* Example: SOCKS-based firewalls

Application-level gateways [2]

* Operates at Layer 5
* Application-specific
* Moderately expensive and slower, but more secure and enables user activity logging
* Generally requires work on the user, network or application-configuration end to support
* Example: Web (http) proxy

Stateful, multi-layer inspection firewalls [3]

* Layer 3 filtering
* Layer 4 validation
* Layer 5 inspection
* High level of cost, security and complexity
* Example: CheckPoint Firewall-1

Some newer firewall technologies build upon these foundations and provide additional ways of securing both systems and networks:

"Personal"/host firewalls

This class of firewall has the ability to further enhance security by enabling granular control over what types of system functions and processes have access to networking resources. These firewalls can use various types of signatures and host conditions to allow or deny traffic. Some of the more common functions across personal firewall implementations include:

* Protocol-driver blocking - prevent "non-standard" protocol drivers from being loaded and used by programs
* Application-level blocking - only allow certain applications or libraries to perform network actions or accept incoming connections
* Signature-based blocking - constantly monitor the network traffic and block all known attacks from making it to the host

The added control increases the difficulty of managing security due to the potentially large numbers of systems that may be individually firewalled. It also increases the risk of damage and exposure due to misconfiguration.

Dynamic Network Firewalling

Similar to the signature-based blocking features of personal firewalls, dynamic network firewalling marries the concepts of IDS, standard firewalls (see above) and emerging intrusion prevention techniques to provide "on-the-fly" blocking of specific network connections that fit a defined profile while allowing connections from other sources to the same port(s). This allows a firewall to proactively deny access to, say, clients that are issuing SQL worm attacks against your network while still allowing standard SQL traffic to flow.

The Basics of Backdoors

What is a backdoor? A backdoor is a "mechanism surreptitiously introduced into a computer system to facilitate unauthorized access to the system,"[4] and can be classified into (at least) three categories:

Active

Active backdoors originate outbound connections to one or more hosts. These connections can either provide full, fluid network access between the hosts (i.e. reverse tunnel-based) or be part of a process that actively monitors the compromised system, records information, sends data out in distinct "chunks" and receives both acknowledgements and/or commands from the remote systems.

Passive

Passive backdoors listen on one or more ports for incoming connections from one or more hosts. Similar to the active backdoors, these programs can either be used to establish a forward tunnel into the compromised network or accept distinct commands and return the requested information.

Attack-based

This category of backdoor could also be classified as the "unknown backdoor." It generally arises from a buffer-overflow exploit of poorly-written programs resulting in some type (e.g. root/Administrator-level, user-level, fully-interactive, one-instruction) of command-level access to the compromised system.

There is one common element among the three types of backdoors: they all work by circumventing the elaborate multi-layer security infrastructure you have worked diligently to design and deploy. Most real (i.e. non-script-kiddie) hackers can determine almost immediately whether it's worth attempting to meet your perimeter routers and firewalls with a head-on attack. Textbook methods can be employed relatively easily to discover the types and configurations of the equipment protecting the borders of your network. Some of these discovery tools can even help detect the presence of proactive network intrusion detection systems (IDS). While there are still daily exceptions, most perimeter networks are configured well enough to make backdoors the emerging method of choice for deep-network penetration, for a number of reasons:

They avoid immediate detection by well-configured firewalls, network & host IDS.

A perimeter attack will (or should) make your operations consoles light up like a Christmas tree. There is no such thing as a casual or accidental scan of open firewall ports. If you don't have a penetration test scheduled, chances are that you're being probed.

Some proactive environments will immediately lock-out the originating systems' IP address when these scans are detected. Even if this is not the case, risking detection removes the primary reason for getting into your environment: the ability to operate freely and without notice.

They don't rely on potentially hard-to-duplicate, specialized attack methods.

What is more difficult: constructing the precise SYN-Frag attack necessary to cause a buffer-overflow in a CheckPoint firewall (that is two revisions behind the latest patch-level) to render it as helpless as a router without ACLs, or getting an unwitting user to open up an e-mail attachment?

Making it past the outer defenses might require the use of four to six of these specialized attack methods, with no guarantee that one of them won't cause a crash and reboot, rendering the entire attempt useless.

They take advantage of the myriad of exploits available in the soft underbelly of an organization's internal network.

How many Microsoft Windows-based workstations and servers are in your organization? How many *nix systems do you have? How many users do you have with each of these types of systems? How many routers, firewalls and IDS systems do you have?

Chances are significantly higher that in most organizations a hacker will have a much easier time finding an un-patched Windows or *nix system to exploit than they will an un-patched and/or misconfigured piece of perimeter networking/security equipment.

An Inside Job

While this article has presented the concept of backdoors in the context of external penetration attempts, they are not limited to that narrow area of practice. Backdoors can be used by employees, contractors or planted-workers to provide less restrictive and undetectable "remote access" points all across your network.

Regardless of the type of backdoor, there are two primary ways of injecting them into your network. The first method involves getting a user to inadvertently load and run the program on their system(s). Extremely common examples of this include e-mail attachments that exploit un-patched vulnerabilities in client systems, web sites/downloads that carry an unexpected or hidden payload, and programs that fall into the classification of "spyware". Unfortunately, these methods are all too common and can result in serious loss of confidentiality and privacy. In the case of "spyware", programs are installed, registry keys are inserted, and browser cookies are set that enable the tracking of every network-based move a user makes. This tracking is not limited to Internet sites, which makes it very easy for these systems to map out all the important places on a company's intranet. While the majority of "spyware" programs are used to present and track your viewing of web ads, others can be crafted as sentinels that alert remote sites to your online/offline status, complete with current network connection information.

Even without loading malicious "spyware" backdoors, a user can still be susceptible to a more corporate form of backdoor. The RealNetworks player communicates constantly with its home network and is nearly impossible to deactivate without reinstalling. Windows XP users can be tracked by enabling automatic updates or simply by having their clocks kept in sync by Microsoft's own time server.

The second method involves actually being on your network in the first place. A trivial example would be installing a custom program which has a programmer-created backdoor embedded in it. These types of backdoors can be malicious, but they are usually coded as a means of circumventing standard software development processes in order to save time.

A more typical, network-level, generic example would be one which is used to bypass remote access restrictions. This may be the oldest form (relative to the early stages of the Internet) of backdoor, initially used to bypass inbound telnet/rlogin restrictions. The setup is rather straightforward: a user installs a program that doesn't require elevated privileges to execute, then the program is run and it waits for connections on a port that isn't blocked by upstream access control devices. This remote access could be to a multi-user system or to an individual's workstation. Initially only Unix-oriented, these types of programs can be difficult to detect.

These types of backdoors are easier to understand in the context of concrete examples:
Program: BindShell
Available at: http://hysteria.sk/sd/f/junk/bindshell/bindshell.c
Type: PASSIVE

This program is easily modified to run on any defined port - for this example, TCP 1234 - and doesn't support a password, thus allowing anyone access. To access this service, the remote user simply starts a telnet session to the desired host and specifies a port number:

telnet some.insecure.host.org 1234

Variations of this program can also be found at http://packetstormsecurity.nl/ which support UDP connections and encrypted sessions.

There are several techniques that can be used to attempt to detect this, none of which will provide simple or direct isolation. In all cases knowledge of the normal run state of the OS is necessary.

* 'netstat -a' comes as part of the Unix operating system and is used to display network port connection status. One would look for port usage that isn't part of the normal run state (see the sketch after this list).
* 'nmap'[5] or 'strobe'[6] external port scanners can be used to identify active or listening ports. Again, knowledge of a normal run state is extremely helpful.
* 'lsof -i'[7], a public domain program, can be used to list all open files and their resource usage. One would search the output for users running unusual programs that require the use of networking ports.
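For example, hunting for the BindShell listener from above might look like this (a sketch; the output shown is illustrative):

# netstat -an | grep LISTEN
tcp        0      0 0.0.0.0:1234            0.0.0.0:*               LISTEN
# lsof -i TCP:1234
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
bindshel 4242  joe    3u  IPv4  12345      0t0  TCP *:1234 (LISTEN)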

Program: Sneakin
Available at: http://packetstormsecurity.org/Exploit_Code_Archive/sneakin.tgz
Type: ACTIVE

This program requires elevated privileges and basically waits for two specially-crafted ICMP packets to arrive before starting something very similar to a reverse telnet session, which establishes a connection to a remote machine. Sneakin requires Linux and netcat[8].

The "listening" state is just as difficult to detect as in the above example. A conventional external port scan will not work, since the program intercepts and processes ICMP packets while still allowing the native operating system kernel access to them. LSOF, however, will show a process accessing the network adapter in promiscuous mode; in general, LSOF might be the best tool available to detect NICs in this state. Netstat will also provide a clue to this particular backdoor, as it will show two ICMP ports using the raw protocol. Once "sneakin" enters its ACTIVE state, additional processes using network ports will show up in LSOF and Netstat output.
Program: GlFtpD
Available at: http://www.security-express.com/archives/bugtaq/1999-q4/0443.html
Type: ATTACK

GlFtpD is one of the standard examples of an attack-based backdoor. The premise behind it is simple: an attacker would take advantage of a few misconfigured features of an ftp server, allowing them to deposit and execute backdoor code, in this case BindShell. A weak inbound policy combined with un-proxied, weak outbound policies do the rest.

Sneakin and bindshell are classic tools used against weak inbound firewall policies. Many sites deploy extremely strong inbound policies, making it difficult to gain direct access to the listening ports. Without direct access, a large number of backdoors cannot be exploited. However, the strongest inbound policy can be easily defeated by active backdoors using "tunneling" methodologies. A tunnel, in the context of backdoors, is best explained as a program that sits on the inside of a protected network and establishes an outbound connection to an external host which results in the flow of bi-directional traffic between these systems and/or networks. This is a serious threat to even the most modern security architectures. A popular example of such communications would be to create an encrypted network connection between two hosts using VPN software.

Properly configured, a VPN tunnel will allow total and unrestricted access to the networks that the hosts are gateways for. When provided as a legitimate remote access tool for employees and business partners, VPNs can increase productivity, save time and reduce costs. When they are used to exploit gaps in the security architecture, they can have just the opposite effect.

VPN technology is still fairly new and requires more than casual knowledge to set up and maintain when used legitimately. The learning curve is even steeper when it is being used as a backdoor tool. But you don't need a VPN for a tunnel. Taking a step back, it is possible to connect just two hosts using more traditional and widely known software: secure shell. Secure shell -- or SSH[9] as it is more commonly referenced -- can be used to establish a tunnel between two hosts by allowing the redirection of a port on the client (outside the firewall) to a port on the host (behind the firewall). For example, one could redirect client port 2200 to host port 23. Assuming the user is currently accessing the client (outside the firewall), they would telnet to localhost port 2200 and get port 23 on the remote host (behind the firewall). A weak outbound policy allows the connection to be generated from the host behind the firewall. This is a neat and popular trick.
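A concrete sketch of the redirection just described (hostnames and accounts are hypothetical):

# On the host behind the firewall, push local port 23 out to the client:
inside$ ssh -R 2200:localhost:23 user@client.outside.example.com

# On the client outside the firewall, ride the tunnel back in:
outside$ telnet localhost 2200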

In the same scenario it is also fairly straightforward to provide access to an organization's internal web sites. The user would simply install a copy of a proxying agent -- e.g. the "squid"[10] web proxy or the Apache "httpd"[11] daemon with proxy support compiled in -- on some internal system. The standard software configuration could be used for either agent. The user would then use SSH port redirection to connect client port 3128 to host port 3128. The client, again outside the firewall, now has proxied access to the organization's internal web servers through proxy port 3128.
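The same trick carries the proxy port (a sketch; assumes squid is already listening on port 3128 on the internal system):

# From the internal system, expose the squid port on the outside client:
inside$ ssh -R 3128:localhost:3128 user@client.outside.example.com

# The outside client then sets localhost:3128 as its browser's HTTP proxy.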

This example can be extended further to enable more than one external host to have access to the internal web sites. The addition of a simple port redirector[12] can make the tunneled, proxied connection available (on port 3128) to all users of the remote network.

Conventional techniques will not work in identifying the existence of this type of tunnel. Depending on the platform used, one could monitor network usage and look for consistent or seemingly permanent processes with established network connections to the outside. At a host level, identifying backdoors in this manner would necessitate the building and maintenance of a baseline network usage state (possibly using the tools mentioned earlier). It is also possible to query the boundary firewalls and monitor the connection state tables, focusing on these established connections. Either process is a daunting task in busy/large environments.

A Ready Defense

There is little one can do to completely defend their network from the use of backdoors. The current set of tools - whether it be host or network IDS - are difficult to configure, deploy and use effectively, especially in large organizations. Without the development of special-purpose tools, expressly designed to monitor systems and networks for the presence of backdoors, the only way to defend against these techniques is through a change in thinking. Security managers who think they can simply hide their networks behind a firewall, sit back and declare that "nobody can get in, I closed all the doors" need to take a hard look at their line of thinking. A good defense against backdoors needs to start with a change in network access philosophy. A solid beginning would be to develop strong Internet access policies and implement technologies that limit outbound access via well-configured firewall/outbound-access architectures.

At a network level, stopping backdoors means making it very difficult for them to establish connections outside of your infrastructure. One approach would be to use circuit-level gateways (i.e. SOCKS/port redirection) as a means of restricting backdoors from using high (or any) TCP ports. With simple port redirection, network requests destined for an external endpoint are terminated internally at a device which makes the connection on its behalf to a pre-defined endpoint. While this limits the number of external resources applications can access, it can also create additional administrative and processing overhead and may not work for all applications. With modern SOCKS gateways, the administration can be done on a global policy level with little impact on performance and almost no impact on applications.

An alternative approach would be through the use of (again) highly restrictive outbound access policies - where very few direct outbound connections are allowed - and Web/application-specific proxies that force authentication before access is enabled. The goal is similar to port-redirection: stop unchecked access to external hosts. While most traditional security schools would like nothing more than to close all the doors and windows, modern businesses need access to external resources to function. Unfortunately, almost any outbound access mechanism can potentially be used to provide a conduit for backdoors. Proxy-based architectures enable granular control over what is allowed outside of your network since applications need to "speak the right language" to be permitted access. Tunnels can be established[13] through proxies (especially via SSL connections[14]) but they are much harder to configure, deploy correctly and rely on. With authentication thrown into the mix, a way now exists to identify all connections down to the source (user). All the pieces are then in place to deter (where possible), detect and discover backdoors.

Even with these techniques - which will require time and resources to implement - developing a network access architecture which makes it easy for users to get work done and difficult for backdoors to do their job is not a trivial endeavor. Fundamentally, your aim should be to design an infrastructure that makes it as efficient as possible to tie network connections to users while narrowing down the options for the backdoors.
[1]The Packet Filter: A Basic Network Security Tool - http://www.sans.org/rr/firewall/packet_filter.php
[2]Application-Level Firewalls: Smaller Net, Tighter Filter - http://www.networkcomputing.com/1405/1405f3.html
[3]Anatomy of a Stateful Firewall - http://www.sans.org/rr/firewall/anatomy.php
[4]Detecting Backdoors - http://www.icir.org/vern/papers/backdoor/
[5]nmap home - http://www.insecure.org/
[6]strobe source code - http://www.packetstormsecurity.org/UNIX/scanners/strobe-1.04.tgz
[7]lsof main distribution - ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof
[8]Netcat - The TCP/IP Swiss Army Knife - http://www.sans.org/rr/audit/netcat.php
[9]OpenSSH home - http://www.openssh.org/
[10]Squid Web Proxy Cache home - http://www.squid-cache.org/
[11]Apache httpd Project - http://httpd.apache.org/
[12]Port Forwarding Tools - http://nucleo.freeservers.com/portfwd/tools.html
[13]rwwwshell - http://www.thc.org/releases/rwwwshell-2.0.pl.gz
[14]ssh-tunnel.pl - http://www.fwtk.org/fwtk/patches/ssh-tunnel.pl

Source : http://securityfocus.com


Potential Trend Micro ServerProtect Security Risk

Vulnerability Identifier: CVE-2007-1070
Discovery Date: Aug 22, 2007
Related Malware: BKDR_IRCBOT.AJZ
Affected Software:

* Trend Micro ServerProtect for Microsoft Windows 5.58

Description:

Trend Micro has recently been informed by SANS Internet Storm Center (ISC) that there is an increase in scans of port 5168, which is a key communication port utilized by the Trend Micro ServerProtect software.

Trend Micro has been made aware of potential vulnerabilities in ServerProtect and has been actively working on developing patches to eliminate these vulnerabilities. This sudden increase in scanning traffic could indicate that malicious entities may be looking for ways to exploit vulnerable machines.

To our knowledge, there are no confirmed exploits of this vulnerability to date. Nevertheless, we implore security administrators to apply the latest ServerProtect security patch available from Trend Micro as soon as possible to protect against any potential attack.

Patch Information:

The latest security patches and ReadMe text files can be found at the following locations:

* English (Security Patch 4):
http://www.trendmicro.com/download/product.asp?productid=17
* Japanese (Security Patch 2):
http://www.trendmicro.co.jp/download/product.asp?productid=17
* Traditional Chinese (Security Patch 3):
http://www.trendmicro.com/download/zh-tw/product.asp?productid=17

For additional questions and/or concerns, contact your local Trend Micro support representative.
Source : http://trendmicro.com


(MS07-050) Vulnerability in Vector Markup Language Could Allow Remote Code Execution (938127)

Vulnerability Identifier: CVE-2007-1749
Discovery Date: Aug 14, 2007
Risk: Critical
Affected Software:

* Microsoft Internet Explorer 5.01 Service Pack 4
* Microsoft Internet Explorer 6 (Microsoft Windows Server 2003 Service Pack 1)
* Microsoft Internet Explorer 6 (Microsoft Windows Server 2003 Service Pack 2)
* Microsoft Internet Explorer 6 (Microsoft Windows Server 2003 with SP1 for Itanium-based Systems)
* Microsoft Internet Explorer 6 (Microsoft Windows Server 2003 with SP2 for Itanium-based Systems)
* Microsoft Internet Explorer 6 (Microsoft Windows Server 2003 x64 Edition Service Pack 2)
* Microsoft Internet Explorer 6 (Microsoft Windows Server 2003 x64 Edition)
* Microsoft Internet Explorer 6 (Microsoft Windows Server 2003)
* Microsoft Internet Explorer 6 (Microsoft Windows XP Professional x64 Edition Service Pack 2)
* Microsoft Internet Explorer 6 (Microsoft Windows XP Service Pack 2)
* Microsoft Internet Explorer 6.0 Service Pack 1 (Microsoft Windows XP 64-Bit Edition)
* Microsoft Internet Explorer 7 (Microsoft Windows Server 2003 Service Pack 1)
* Microsoft Internet Explorer 7 (Microsoft Windows Server 2003 Service Pack 2)
* Microsoft Internet Explorer 7 (Microsoft Windows Server 2003 with SP1 for Itanium-based Systems)
* Microsoft Internet Explorer 7 (Microsoft Windows Server 2003 with SP2 for Itanium-based Systems)
* Microsoft Internet Explorer 7 (Microsoft Windows Server 2003 x64 Edition Service Pack 2)
* Microsoft Internet Explorer 7 (Microsoft Windows Server 2003 x64 Edition)
* Microsoft Internet Explorer 7 (Microsoft Windows XP Professional x64 Edition Service Pack 2)
* Microsoft Internet Explorer 7 (Microsoft Windows XP Professional x64 Edition)
* Microsoft Internet Explorer 7 (Microsoft Windows XP Service Pack 2)
* Windows Vista
* Windows Vista x64 Edition

Description:

This security update resolves a privately reported vulnerability in the Vector Markup Language (VML) implementation in Windows. The vulnerability could allow remote code execution if a user views a specially crafted Web page using Internet Explorer. Users whose accounts are configured to have fewer user rights on the system could be less affected than users who operate with administrative user rights.

An attacker could exploit the said vulnerability by creating a specially crafted Web page or HTML e-mail. When a user views the Web page or the message, the vulnerability could allow remote code execution.

Patch Information:

Patches for this vulnerability are available at:

http://www.microsoft.com/technet/security/bulletin/MS07-050.mspx



Source : http://trendmicro.com


Wednesday, August 29, 2007

Introduction to IPAudit

IPAudit is a handy tool that will allow you to analyze all packets entering and leaving your network. It listens to a network device in promiscuous mode, just as an IDS sensor would, and provides details on hosts, ports, and protocols. It can be used to monitor bandwidth and connection pairs, detect compromises, discover botnets, and see who's scanning your network. Compared to similar tools, such as Cisco Systems' Netflow, it has many advantages (see the SecurityFocus articles on Netflow, part 1 and part 2). It is easier to set up than Netflow, and if you install it on your existing IDS sensors, there is no extra hardware to purchase. Since it captures traffic from a span port, it does not require that you modify the configuration of your networking equipment or poke holes in firewalls for Netflow data.


Packet analysis tools like IPAudit help fill the gaps left by an IDS or an IPS. How? An IDS looks for certain signatures or behavior and can alert and log; an IPS looks for the same anomalies but can also prevent the attack. Both of these technologies can greatly increase the security of your network -- however, what happens if they miss an attack? How would you know? Even if the IDS sensor matches a packet, a machine can still become compromised, and when this happens, how do you tell what occurred on the network after the compromise? IPAudit can help fill these gaps, in addition to providing useful information about your network beyond specific security events. It is most often used by universities, where its primary role is to identify who is using the most bandwidth, but the author finds it useful for all organizations; in fact, many corporate customers will recognize the benefits and incorporate it into their security tool arsenal.
Installation and configuration
IPAudit was written by Jon Rifkin at the University of Connecticut. It can be downloaded from SourceForge and is licensed under the GNU GPL. The core ipaudit program is a command line tool that uses the libpcap library to listen to traffic and generate data. The IPAudit-Web package includes the ipaudit binary in addition to a Perl-based web interface that creates reports based on the collected data. Using the Web package is recommended, as it gives you a slick graphical interface complete with traffic charts and a search feature.

You will need to have a Linux or Unix system setup with the libpcap library installed. The latest version can be downloaded from tcpdump.org. In addition to libpcap, you will need Perl, Apache, GNUplot, and a perl module called "Time::ParseDate". Refer to your Linux distribution's documentation for more information on how to install these packages (here's a tip: In Debian Linux, execute the command 'apt-get install libtime-modules-perl' to install Time::ParseDate). Once you have installed these packages you are ready to begin installing IPAudit:

Step 1 - Become root on your system and create a user called "ipaudit". It will need a valid shell and home directory (typically /home/ipaudit, which will be used in this article for simplicity). Now switch to the newly created "ipaudit" user.
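For example (a sketch; useradd options vary slightly between distributions):

# useradd -m -d /home/ipaudit -s /bin/bash ipaudit
# su - ipaudit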

Step 2 - Download and unpack the ipaudit-web tarball:

$ tar zxvf ipaudit-web-1.0BETA9.tar.gz

Step 3 - Change to the compile directory:

$ cd ipaudit-web-1.0BETA9/compile

Step 4 - Execute the configure script and run make:

$ ./configure
$ make

Step 5 - Become root and execute the make install commands:

$ su -
Password:
# make install
# make install-cron
# exit (Leave root and become ipaudit user again)
$

Step 6 - Now you will need to edit /home/ipaudit/ipaudit-web.conf

#
LOCALRANGE=127.0.0
#

#
INTERFACE=eth1
#

Change the LOCALRANGE variable to your local subnet on the inside of your network. Also be certain to set the INTERFACE variable to the interface that you have setup to capture the desired traffic on your network.

Step 7 - Add the following lines to your Apache httpd.conf file if they do not already exist:


<Directory /home/*/public_html>
    AllowOverride All
    Options MultiViews Indexes Includes FollowSymLinks
    Order allow,deny
    Allow from all
</Directory>

<Directory /home/*/public_html/cgi-bin>
    Options +ExecCGI -Includes -Indexes
    SetHandler cgi-script
</Directory>


Note that your Apache server may already contain configuration similar to the above for the "/home/*/public_html" directory. If you do not plan to use the Userdir module for anything other than IPAudit, it is suggested that you comment out the original configuration and replace it with the configuration above.

Your Apache server will need to support SUEXEC, Mod_Perl, and Mod_Userdir. Once you have modified the Apache configuration, restart your Apache server. For more details on the IPAudit-Web installation, refer to the INSTALL file located in the installation directory of that package; it contains more information about the required Perl module Time::ParseDate, SUEXEC, and password-protecting your IPAudit-Web installation. Since it requires only moderate Google hacking skills to find other people's IPAudit installations, protecting yours with a password is a very good idea.
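A minimal sketch of such protection, assuming htpasswd is available and your AllowOverride settings permit authentication directives:

$ htpasswd -c /home/ipaudit/.htpasswd admin

# Contents of /home/ipaudit/public_html/.htaccess:
AuthType Basic
AuthName "IPAudit Reports"
AuthUserFile /home/ipaudit/.htpasswd
Require valid-user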

Step 8 - Check your installation

Open a web browser and go to:

http://<your server>/~ipaudit/

If your installation was successful you should now see a screen like the one shown below in Figure 1.


Figure 1. Running IPAudit's web interface for the first time.

You should make certain that the time on the server running IPAudit is correct and being kept up to date using NTP. Without accurate time, IPAudit will get confused if the time on the packets differs greatly from that of the system time.
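A one-shot sync looks like this (assuming ntpdate is installed; running the ntpd daemon is the better long-term approach):

# ntpdate pool.ntp.org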

After the first half hour mark, IPAudit will begin to graph all of your traffic and generate some reports. The screen should then look similar to the one in Figure 2.


Figure 2. First graph appears after 30 minutes.

The graphs will get more interesting as time goes on and IPAudit sees more traffic. A "spike" in the graph is typically an indication of a problem, such as a host sending out a DoS (Denial of Service) attack.
General reporting
IPAudit's "Network Reports" are useful for many reasons. The thirty-minute and daily reports are exactly the same, except of course for the timeframe. By clicking on the link labeled "-last-" next to the "30min" link, you will see the report for the last 30 minutes. At the top of the screen you can see general network statistics, which is useful if you are trying to keep tabs on your total bandwidth utilization. This is followed by the busiest local hosts report, which is a good way to keep an eye on who is transferring the most data into or out of your organization, as shown in Figure 3.


Figure 3. Displaying the busiest hosts on the local network.

Your servers, such as SMTP mail servers, will typically be close to the top of this list (in addition to your P2P hosts, if you allow that application). Over time you will develop a baseline of your busiest hosts. When you check the report every day, a new host suddenly occupying the top spot would be cause for you to investigate.

The busiest remote hosts report tells you who on the Internet you are transferring the most data to and from.


Figure 4. Displaying the busiest remote hosts.

Typically these tend to be Akamai caching servers, Windows Update IP addresses, and other popular web sites like Google or Yahoo. If one of the sites listed resolves to something unfamiliar like www.evil.com, it should be cause for alarm.

The next report, "Possible Incoming Scan Hosts," shows the IP addresses of the hosts that connected, or tried to connect, to the most unique IP addresses on the local subnet. This report is useful to see who is scanning your network, and what ports they are scanning for.


Figure 5. Top remote IP addresses scanning your network.

It is good to check this table every day when monitoring a network, and useful to research the most common ports the network is being scanned for. The following web sites are useful when determining what applications correspond to the ports attackers are scanning for:

* SANS - From the main page you will find a port search function. This reads from the Dshield database. This page also offers the most up-to-date information on port-scanning trends and general blackhat activity.
* The official port assignments database.
* Google - When in doubt, use Google to find information about ports, for example "tcp port 6881" to check for known Trojans that frequently use a given port.

Port-scanning activity is sometimes due to a new network scanning tool being released (like scanssh), or a new virus or worm that is being circulated. Armed with this information, it is good to warn the user population if the threat warrants that level of notification. An administrator can then target notifications toward specific groups. For example, if the network is being scanned for MySQL instances, you should notify the server group and tell them to make certain they have applied all relevant patches and to not expose their servers to the Internet if it can possibly be helped. Oftentimes you can correlate vulnerability or exploit releases with the port-scanning attacks on your network. While you probably have a firewall that blocks these attempts, what if the firewall becomes misconfigured because of a firewall policy change? These reports allow you to react to threats against your network in an informed manner, adding another layer to your network security infrastructure.

Possible outgoing scan hosts are listed next. While the possible incoming scan hosts can be used for proactive measures, the following outgoing scan hosts report is more useful for reactive measures.


Figure 6. Outgoing scan hosts are useful for discovering Trojaned machines.

If I find hosts on the inside of the network scanning outwards, it is usually an indication that a machine has been compromised by a worm or virus; in some cases an actual attacker has taken control of the host and is using it to scan for other machines. When you check the reports on a regular basis you will be able to develop a baseline and know what is normal on your network with regard to the number of hosts contacted in a given day. Some hosts need to contact numerous other unique hosts, such as SMTP relay and DNS servers; however, a typical user's workstation does not normally contact upwards of 1,000 different hosts on the Internet.



The port that your local hosts are scanning for is significant as well. A machine scanning out to the Internet on port 445 (Windows CIFS) or 6667 (IRC) should raise a red flag and cause you to investigate it as if it were compromised. Port 445 (SMB/CIFS) is commonly scanned for on the Internet due to the number of vulnerabilities associated with it, while IRC is typically used as a communications mechanism for compromised machines, more commonly known as botnets. However, a machine scanning out on ports 6881 (BitTorrent) or 6346 (Gnutella) is more likely an indication that the host is running a P2P networking application, which commonly scans the Internet looking for other P2P-enabled hosts. The policy within your organization should dictate whether this is acceptable behavior.

The busiest host pairs table is the final report. It lists which hosts had the largest single transfers between them. It's a good idea to look over this list and make certain the transfers are normal. Normal behavior would be someone downloading a Linux ISO image, whereas less normal behavior could be someone downloading pirated media from an already compromised host.

Going back to the main IPAudit page, you will notice even more reports that you can run. The client/server report can be useful for monitoring who is running the following services on your network:

* HTTP Servers
* Mail Servers
* SSH Servers
* Telnet Servers
* HTTPS Servers

I typically check these reports on a weekly basis to get an idea of who is running what server services on the network. A red flag could be a user workstation that ends up in the top ten SMTP servers listing. This could indicate that the host has been infected and is being used to distribute SPAM. The listing of HTTP servers is useful to see not only who may be running legitimate web servers on your network, but it can also be an indication of anyone tunneling other protocols with HTTP and running it over port 80 or 443 TCP. Since IPAudit only looks at IP and transport layer information, it will not distinguish between actual HTTP traffic and tunneled traffic (which can actually be good in this case).

The traffic type, weekly, and monthly reports all contain summary information about your network. They should be checked weekly to get an overview of what networking protocols are in use, and which hosts transmit and receive the most data. Host reports contain much of the same information as the weekly and monthly reports, except on a per host basis.

The log searching feature is an excellent way to find certain traffic types using multiple criteria, as shown below in Figure 7.


Figure 7. Searching IPAudit's logs.

You can adjust your query to a specific time period, right down to the minute. The IP address can be either a host on the local network or on the external network/Internet. The local port is relative to the local address space you specified in the IPAudit-Web configuration file, as is the remote port. The next two fields, Max Lines Displayed and Print Increment, tell IPAudit how to print out the query; it is best to start with a low number of lines displayed the first time you run a query, just in case there are thousands of results, which could take some time. The session size is a particularly useful field when trying to determine traffic type: sometimes you want to distinguish between actual data transfers and mere portscanning, and by playing with the values in these fields you can do just that (for example, suppose you want to know who actually connected to the MySQL server, not who scanned it). The protocol drop-down menu allows you to choose between TCP, UDP, and ICMP. IPAudit tries to keep track of state by indicating who the first talker was in the connection.

Overall, IPAudit has many useful features and many ways in which to look at your network traffic. The next section will go into more detail on how to use it to detect compromised machines on your network.
Detecting compromised hosts
Similar to an IDS, IPAudit provides a historical account of your network traffic. If an exploit comes flying into your network and is picked up by your IDS, the IDS happily logs it. When you check the logs you can see the event, including the full packet, and you may say, "Yup, that was an exploit alright; I wonder if it was successful?" IPAudit works in much the same way, except you can use it to examine all behavior exhibited by the potentially compromised host after the exploit was launched. Here is an example:

snort: [1:2351:10] NETBIOS DCERPC ISystemActivator path overflow attempt little endian unicode [Classification: Attempted Administrator Privilege Gain] [Priority: 1]: {TCP} 192.168.1.237:4014 -> 192.168.1.223:135

snort: [1:2123:3] ATTACK-RESPONSES Microsoft cmd.exe banner [Classification: Successful Administrator Privilege Gain] [Priority: 1]: {TCP} 192.168.1.223:31337 -> 192.168.1.45:32768

The above Snort alerts indicate that 192.168.1.237 is trying to exploit 192.168.1.223 using a very common exploit that takes advantage of the MS03-026 RPC vulnerability (See the full Snort rule documentation). We then see a very obvious backdoor attempt, most likely a simple Netcat command such as "nc.exe -l -p31337 -e cmd.exe".

Using IPAudit, let's examine the victim host's traffic. I would first go to the IPAudit searchable host feature, enter the timeframe I want to look at, then the IP address. It produces a report as shown in Figure 8.


Figure 8. Search results for a certain timeframe and IP address.

The above data indicates that the host is portscanning for port 445. First, we see the same source port used to connect to multiple destination hosts; in normal TCP communications, a different source port would be used for each new connection. Second, we see many attempts to port 445 across a class B network, with little data being transferred. Also, the column labeled "First Talker" indicates that the host on the local network initiated each connection, while the "Last Talker" column is blank, telling us that 192.168.1.223 sent out packets but received no responses. These are all telltale signs of portscanning.
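
The same fingerprint (one source port talking to many destinations) can also be pulled out of any text flow log outside the web interface. This is only a sketch; it assumes a hypothetical whitespace-separated log named flows.txt whose first four columns are source IP, source port, destination IP, and destination port:

# flag any source IP/port pair that contacted more than 50 destinations
awk '{ pair = $1 " " $2; seen[pair]++ } END { for (p in seen) if (seen[p] > 50) print seen[p], p }' flows.txt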

What if you want to see what happened beyond the portscanning? If someone did in fact compromise this host, then they most likely uploaded some sort of rootkit or IRC bot. Let's take the IP address of the machine that opened the backdoor on our victim host and see what other machines it connected to that day, as shown in Figure 9.


Figure 9. Search showing potentially compromised IP connecting to other machines.

Here we see it connecting to our known victim host and transferring data on port 4000, among others.

After further analysis we see a similar transfer to another host on our network, 192.168.111.69, as shown in Figure 10.


Figure 10. Similar traffic shown with another machine.

The backdoor port is different, but the host is in fact compromised in the same way as 192.168.1.223. This can be verified in the IDS logs:

snort: [1:2123:3] ATTACK-RESPONSES Microsoft cmd.exe banner [Classification: Successful Administrator Privilege Gain] [Priority: 1]: {TCP} 192.168.111.69:2143 -> 192.168.1.45:32768

Using IPAudit we can then continue to map the scope of the compromise. This includes all machines that have become compromised, which servers attacked them, which servers are controlling them via backdoors, and which IRC servers they logged into. We do this by modifying our search criteria to map connections between all hosts involved.

The incident described above was based on a real incident, but was also recreated in a lab. The real incident involved a dozen compromised computers, two IRC servers, an attacking host, and a remote shell host. It was all mapped using IPAudit and correlated with Snort.
Conclusion
IPAudit is a great addition to your network monitoring. It provides reports that give you an overview of your network, inform you of security events, and report on anomalies. When used in conjunction with intrusion detection, a security incident can be mapped out in a great deal of detail. Best of all, IPAudit is a free tool that is easy to set up and maintain. You will find it useful to install alongside all of your IDS sensors.
About the author
Paul Asadoorian, GCIA, GCIH is the lead security engineer for a large university in the New England area where he designs, implements, and maintains intrusion detection systems, firewalls, and VPNs. He gives regular presentations in the academic community relating to network security. Paul is also the founder of Defensive Intuition, a security company specializing in security auditing, penetration testing, and other security related services.

Source : http://securityfocus.com


Standards in desktop firewall policies

The idea of a common desktop firewall policy is a very good thing in an organization of any size. It makes responses to external or internal situations, such as virus outbreaks or the network-oriented propagation of malware, more predictable. In addition to providing a level of protection against port scanning, attacks, and software vulnerabilities, it gives the organization's local security team a baseline or starting point for dealing with such events.

The purpose of this article is to discuss the need for a desktop firewall policy within an organization, determine how it should be formed, and provide an example of one along with the security benefits it provides.
The Problem
The trick to a good desktop firewall policy is to balance security against the networking requirements of the applications the organization needs. The organization may not yet have complete knowledge of these requirements, which makes the first attempt to define a standard, global policy interesting, depending on the level of protection one is trying to provide and the environment the desktops may be in.

One approach to an initial policy is a port-based firewall with all inbound ports blocked on the desktop. An older school of thought, on the other hand, would block only the ports that clearly need to be blocked: estimate the software's network requirements, then combine this with an effort to block the most obvious vulnerable services. Evaluating FTP, Windows IIS, or NetBIOS requirements might provide a first pass at a standard global policy. This older approach leaves the balance tipped toward the (as yet unknown) network requirements of the software and away from protection; in other words, it offers functionality over security. While it provides consistency, it may not fully satisfy the organization's security requirements in cases where the desktop (or laptop) is located off site.

Location awareness is a feature of some desktop firewalls that one could use to design a policy that changes to better fit a user's location. The location could be selected automatically based on a successful Windows domain login, a specific IP address, a DNS server address, the network adaptor type, or the client firewall's ability to connect to a policy manager.

If location awareness is not a built-in feature of the firewall, the policy could be designed around the organization's internal IP address range or, if available, be configured around the DNS domain name. For example:

allow all inbound *.someorganization.org
Issues with a "block all" inbound policy
A block-all inbound policy while connected offsite would seem to present the least amount of risk, but might not be completely workable while onsite. The first issue may be caused by the firewall itself: depending on the vendor, characteristics of the firewall may impact application functionality under a block-all inbound policy. Problem areas include UDP, complex protocols like FTP, NFS, applications running in a service mode, and an Intrusion Prevention System if one is provided with the firewall. Each of these issues is discussed below.

UDP, being a stateless protocol, is difficult for any firewall to handle. Suppose a simple UDP-based service runs on port 1313, for example. The UDP client (running the desktop firewall) would send a datagram to port 1313 and assign a local port for the reply. There may or may not be a reply; if there is, it won't be easy for the firewall to determine whether to allow it. Either the firewall must attempt to keep state of all outbound UDP traffic on its own, or the UDP port requirements must be known and the firewall configured to allow the reply on a case-by-case basis.
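
For comparison, Linux 2.4+ kernels approximate UDP "state" with connection tracking, so replies to our own datagrams can be admitted generically. A minimal sketch (the port-1313 service is the hypothetical one described above):

iptables -A OUTPUT -p udp -j ACCEPT                               # let our own queries out
iptables -A INPUT -p udp -m state --state ESTABLISHED -j ACCEPT   # admit only replies to them
# without state tracking, each service must instead be opened by hand:
iptables -A INPUT -p udp --sport 1313 -j ACCEPT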

An example of a facility requiring UDP might be a printer or scanner client that issues a UDP broadcast and then awaits a reply. That reply would come from a scanner or printer the user may want to access, and might include its status or availability.

FTP can cause another issue with the firewall. In some cases the firewall may not support active FTP, which is a problem because the standard Windows command-line FTP client does not support passive mode. In active FTP the server initiates a connection back to the client to do the actual transfer of data. Oddly, FTP is still widely used and is sometimes even embedded within other software. Fixes for active FTP on firewalls can be ugly and may end up being among the first application-based rules.
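
On a Linux desktop, for instance, the usual fix is to load the FTP connection-tracking helper so the data connection the server initiates is recognized as RELATED to the control session. A sketch, assuming a 2.4-series kernel:

modprobe ip_conntrack_ftp                                          # teaches conntrack to follow FTP sessions
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # admits the active-mode data connection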

Applications running in a service mode admit one of two solutions: either the firewall supports an application-based rule, where the application's network access is restricted to predefined ports, or one can simply allow the open port, possibly with some other restricting criteria. Restrictions by IP address or time of day are possible as well, and may be desirable.

An Intrusion Prevention System may be an additional feature of an enterprise desktop firewall. It allows the firewall to detect possible attacks by examining inbound packets and matching data and port usage against a list of known attack signatures. The IPS may be configured to respond by blocking the inbound packet, or by allowing it and sending an alert. False positives on a firewall supporting IPS could mistakenly block inbound traffic and would need to be analyzed and adjusted on a case-by-case basis. Logging the event and allowing the traffic may be the quickest and easiest way to deal with false positives.
The Environment
In this part of the article, we detail what is needed to create an environment where software requirements are known and our corporate standards are enforced:

1. A desktop firewall. This is the tool used to enforce restrictions on network access by limiting port and protocol access. The firewall should limit the user's ability to change its configuration, yet provide enough function that the user can identify issues that may be caused by the firewall policy. The firewall should support both port- and application-based filtering.
2. A security policy. This defines what is or is not permitted to or from the network on a standard desktop. Typically this would be generated by a high-ranking security group or set of officials in the organization, and would be generalized into a non-technical document (it could be as simple as a block-all-inbound rule).
3. Knowledge of existing port requirements, or a baseline of requirements. These would be taken from the standard or default desktop operating system configuration used in the organization. Typically an organization has an install tailored to its own requirements, which may include patches, anti-virus, and common software required by all users. This, combined with the security policy, forms the basic desktop firewall policy.
4. The ability to deploy a single global firewall solution to all desktops. This means deploying the solution to all desktops in the organization with a consistent or single policy. Enforcement and tracking of deployment would also be necessary.
5. A facility to provide and update the firewall policy. Some firewalls can be centrally managed directly. Depending on the needs or structure of the organization, the minimum requirement is a common, global firewall policy that can be updated, for example through the replacement of a configuration file. Obviously some form of central software management would need to be in place.
6. A large plastic bat to handle upset users.
7. Tools to aid in the analysis of networking requirements. For example, this might include Ethereal for monitoring traffic, the ability to analyze firewall logs, Perl scripts to test firewall rules, Nmap for port scanning, and so on; a brief sketch follows this list.
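
As a sketch of how these tools combine in practice (the interface name and address below are hypothetical):

tcpdump -n -i eth0 -w desktop.pcap host 192.168.4.10    # record the desktop's real traffic for later analysis
nmap -sT -sU -p 1-1024 192.168.4.10                     # verify which TCP and UDP ports it actually exposes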

"Software Networking Standards" – A potential benefit
If the organization knows the networking requirements of its applications, a policy can easily be created. The idea of software networking standards can then be enforced through the policy.
An example
To provide a firewall policy for the examples below, let's first assume that a policy is designed and configured to block all inbound TCP/UDP and allow all outbound TCP/UDP. We will also assume the firewall does not properly handle outbound UDP or complex protocols such as FTP. Some software requirements in this environment are already known; for example, suppose the organization permits file sharing. This requires inbound TCP port 445 to be open. A rule is created to allow inbound 445 and also restrict it to a range of IP addresses (192.168.4.0 through 192.168.20.255 in this example, with the understanding that this private IP address range could also be used by other organizations, such as hotels, creating a potential hole for traveling users). Finally, ICMP is allowed for troubleshooting. A sample policy might thus be configured as follows (an iptables sketch of the same policy appears after the list):

* Allow all inbound and outbound ICMP
* Allow inbound TCP 445 from hosts 192.168.4.0 – 192.168.20.255
* Block all inbound TCP
* Block all inbound UDP
* Allow all outbound TCP
* Allow all outbound UDP
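
For readers on Linux desktops, the same sample policy can be sketched in iptables. This is only an illustration, not a vendor configuration; it assumes a stateful 2.4+ kernel, and the iprange match (needed because 192.168.4.0 through 192.168.20.255 is not a clean CIDR block) requires a reasonably recent iptables. Note that because all inbound traffic is blocked while all outbound is allowed, a stateful rule must admit the replies to our own connections:

iptables -P INPUT DROP                                  # block all inbound TCP/UDP by default
iptables -P OUTPUT ACCEPT                               # allow all outbound TCP/UDP
iptables -A INPUT -p icmp -j ACCEPT                     # allow inbound ICMP
iptables -A OUTPUT -p icmp -j ACCEPT                    # allow outbound ICMP
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT    # replies to our own traffic
iptables -A INPUT -p tcp --dport 445 -m iprange \
    --src-range 192.168.4.0-192.168.20.255 -j ACCEPT                # file sharing from the internal range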

Let's now look at the benefits of using our sample policy.


Benefits of a desktop firewall policy

* The ability to predict the impact of security-related events is enhanced. An event could have many characteristics and take on many different forms. If some of those characteristics involve network port access, the policy may offer an initial form of protection. In addition, network-oriented responses to these events become more predictable. For example, the application of router and network firewall ACLs is sometimes used to deter the propagation of viruses and worms. The problem is that the implementation of ACLs could impact production software in cases where applications and the security event have similar port requirements. Depending on the characteristics of the event, the example policy may make ACLs unnecessary on some network segments.

* Provide consistent software solutions (as opposed to multiple solutions that provide the same function). Two departments requiring a similar service may deploy two different software solutions. While it is best that departments in any organization coordinate development and deployment of software solutions, the reality is that this doesn't always happen. The policy defined above puts hurdles in front of new applications: if the policy conflicts with the network requirements of an application, a request for a policy enhancement is required. At this point, if not before, the application becomes known to the organization.

* Restrict network-oriented programs from hitting the desktop until they have been evaluated. Again, the policy may present new hurdles for applications, depending on their requirements. A recent example is Microsoft's ActiveSync 4.0 software. (Visit Microsoft's ActiveSync page for the requirements.) The example policy above would require modifications, which could be loose or tight. The policy impacts the application in several areas, namely its inbound port requirements and backend network construction, which involve the use of UDP along with TCP. A fairly tight modification of the policy might bind the local ports to the application for the backend network only, such as:

allow 169.254.2.1 inbound access to the { required ports } AND { executables }

Analysis of the application through the use of Nmap can verify the port requirements on the backend network, but it also reveals activity on the primary network. In this case a 'status' port, TCP 999, becomes active on the primary network when the handheld that uses ActiveSync is cradled. In theory, one could execute a single port scan against port 999 on a subnet and identify all IP addresses that are currently syncing a handheld. Depending on the firewall internals and the policy defined, Nmap may report port 999 as 'closed'; some firewalls can be configured to drop an inbound packet for a blocked port, which would return nothing in this case.
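
Such a sweep is a one-liner (the subnet is hypothetical):

nmap -sT -p 999 192.168.4.0/24    # any host reporting 999 open is cradling a handheld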

* Restrict the use of service-oriented software. Individuals concerned with security have to be interested in, and even frustrated by, this. Software running on an ordinary desktop (as opposed to a 'server') that requires a listening port could be susceptible to coding errors allowing inbound access, or to backdoors. Such software should be avoided.

* Software using unusual protocols will become known (such as streaming systems using the multicast management protocol IGMP). While the use of protocols other than TCP and UDP isn't itself an issue, it is an advantage to know they are in use. Some firewalls will not pass these protocols, and isolating their use can be difficult. It's now common for the software provider or vendor to make their networking requirements available for organizations supporting a desktop firewall.

* Track the use of broadcast-oriented software, which usually runs over UDP. The example policy in this article would disable responses to a UDP broadcast. A good standard for any organization is to give service-oriented equipment, such as printers and scanners, static IP addresses, and to make users aware of the names and IP addresses of these facilities in their area. The security issue with broadcast discovery is that the service could be spoofed: a phony print server could be created to capture and forward printouts to the actual server.

* Track the use of backend networks or dual-homed machines. The example policy may reveal a backend network, depending on what it is being used for. The use of backend networks won't directly cause security concerns, but their existence and use should be identified. For example, asset and patch management could be impacted, and real vulnerability assessment would not be possible.

* Software and desktop support can be simplified. The example policy places limits on what software can do on the network. Software requiring modifications to the policy obviously becomes known, and the specific policy modifications help create a consistent deployment.

* The example policy would help in the enforcement of the organization's security policies, or in the detection of software that breaks them. For example, it may be part of the security policy to prohibit the use of database, web, FTP, or P2P servers on ordinary desktops. The policy in this example would block those services.

* A global policy could help enforce an organization's specific standards, such as the use of a particular remote access VPN or streaming media solution. The example policy would most likely require modifications to support a VPN, and the software requirements typically differ between VPN vendors as well.

* The policy could be used to limit access to services running over non-standard ports. For example, assume that only minimal outbound Internet access restrictions are in place, and that a policy and mechanism exist to monitor and log Internet web access. Typically web access is done using TCP port 80. However, it is possible for a user to access an external anonymous web proxy (such as www.proxyblind.org; there are many others) that may run on a port other than 80. This usage would bypass logging and allow the user to surf the web anonymously. A modification to our example policy restricting iexplore.exe to outbound TCP port 80 could be created. Limitations on other ports commonly used by anonymous web proxies could also be added (these are often found on TCP ports 3128, 8000, and 8080).
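
iptables, to pick one example, cannot tie a rule to iexplore.exe (binding rules to an executable is exactly what the desktop firewall product adds), but the port-level half of the restriction might look like this sketch:

iptables -A OUTPUT -p tcp -m multiport --dports 3128,8000,8080 -j DROP    # drop outbound connections to common proxy ports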

Summary
A common desktop firewall policy could lead to, or help in the enforcement of, software networking standards. If this is something an organization wants, there are clear benefits. Depending on whether the organization is running a firewall with a consistent policy or not, networking standards at some level may already be enforced. New applications may or may not be compatible with this policy, and changes or modifications would need to be requested. Those who deploy new software may need to be a bit more familiar with the network requirements of their software, to be able to adhere to policy.

The desktop firewall is typically just one piece of desktop security, often combined with patch management, anti-virus, and software deployment/management facilities to form a complete security solution. As part of that solution, the desktop firewall's job is simply to block network traffic and detect attacks. Yet in reality it can do more than this, even if the added benefits are not quite as tangible as the desktop protection it supplies.

The implementation and maintenance of a desktop firewall can be a stressful and frustrating experience, particularly for organizations that do not have a full understanding of their own network requirements. It can cause existing software to stop working. It can push deployment dates out due to the additional development time required to isolate compatibility issues. It may require additional resources or steps to get software to the desktop.
Conclusion
In this article we discussed the need for a desktop firewall policy within an organization and how such a policy should be formed, and then provided an example, along with a detailed discussion of the security benefits it offers.

An old school of thought would resist any restrictions placed on internal network access. But today the stakes are higher, and security is paramount. Organizations have become more accountable for their network traffic and for the legality of the software they choose to run. Few options are available for limiting use of the network beyond simply blocking it at the usual choke points, which does not allow control of specific applications. This needs to change, as more and more attacks and security concerns come from the soft underbelly of the organization: the internal network.

Source : http://securityfocus.com


Sunday, August 26, 2007

Antivirus Concerns in XP and .NET Environments

After Windows NT was released, it took virus writers five years to learn how to infect it. Windows NT 3.1 and the Win32 API were released in late 1993, but it wasn't until August 1998 that W32.Cabanas became the first NT virus by capturing coveted kernel mode access. .NET and some of Microsoft's other initiatives have not been as lucky. The purpose of this article is to discuss antivirus (AV) concerns with .NET and Microsoft Windows XP.

.NET Review

.NET was officially announced by Microsoft in July 2000 at a Microsoft Professional Development Conference. Since then, what .NET means and the products involved have changed (and been renamed). .NET is an idea and a programming platform. The basic concept is an evolving extension of Microsoft's Object Linking and Embedding (OLE), introduced back in the early days of Windows 3.0. OLE allows you to copy objects and data created in one application, like a spreadsheet graph, into other applications. OLE evolved into ActiveX objects, which are executables you can download and run within an Internet browser.

.NET takes it two steps further by allowing the entire application to be hosted elsewhere (potentially allowing your environment to follow you, no matter where you go) and allowing different distributed software parts to make up one application. For example, your Windows desktop settings, your applications, and your data may be available to you wherever you compute. Passing an Internet kiosk in an airport? Just log in and access your desktop and your data. Different applications will co-exist, over the web, to bring you that integrated environment: one vendor will handle the login and authentication, another will store your data, and each of your applications will be made up of specifically customized components. I'll take two thesauruses, a math equation editor, and a French translation dictionary, please. Hold the autocorrect.

All of this magic happens because of the new distributed .NET programming platform and a horde of new Microsoft developer tools and languages: C# (C Sharp), Visual J#, VB.NET, Visual Studio .NET, ASP.NET, an increased reliance on XML, and a host of other new programming tools and platforms.

The .NET execution framework reminds many people of Java's model. In order for a Java applet to run, it must be executed in a Java Virtual Machine (JVM). .NET executables (regular Windows 32-bit Portable Executables) run on top of a similar environment called the Common Language Runtime, or CLR; this is what you install with the Microsoft .NET Framework component. The CLR runtime engine performs security checks, does type checking, checks memory pointers, loads component dependencies, and Just-In-Time (JIT) compiles the platform-independent intermediate code into executable code. Further, there are intermediate code representations (called Microsoft Intermediate Language, or MSIL), class files, class loaders, and separate treatment of trusted and untrusted code. Untrusted code is sandboxed and prevented from accessing or risking system resources. This should sound a whole lot like Java to anyone.
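
You can inspect the MSIL layer for yourself. A minimal sketch, assuming the .NET Framework SDK tools are on the PATH and a trivial hello.cs source file exists:

REM compile a C# source file into a .NET portable executable
csc /nologo hello.cs
REM dump the MSIL that the CLR will JIT-compile at run time
ildasm /text hello.exe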

I bring up this comparison because .NET is more complex than Java, and complexity doesn't mix well with security. I often hear that Java is very secure because it has only had one widespread in-the-wild exploit. I love Java, and the people who designed it did so with security as a top priority. But the truth is that Java has had dozens of security holes patched since its release; just because the white-hatters are the ones finding them doesn't make it a secure platform. Many Java exploits have been found by breaking assumptions between its mesh of interoperating components. In order for Java security to work, all the components must work 100% of the time: if one fails, they all fail. Because .NET's execution model is roughly similar, it isn't a hard stretch to believe that many holes will be found in .NET.

Web Services

Web services are the reason for all the complexity. Web services are XML applications, interfaces, and data designed to be shared across multiple platforms around the Internet. A web service might be a single application hosted by an Application Service Provider (ASP), or it could be a combination of several different vendors' web services making up one application experience for the user. For example, consider a typical online transaction such as buying a pair of jeans. You may use one web service to authenticate your login to the manufacturer's web site, another to help get you the perfect fit, and another to determine delivery details and payments.

Microsoft's Passport was the first example of a web service. Passport allows you to use a single login name and password for all web sites that support Passport authentication. It has tens of millions of users, and it has had a series of security issues over the years. In one such instance in May of this year, it was discovered that a remote attacker could send a rather trivial, malicious URL to hotmail.com, change anyone's password, and take over the Passport account. Maliciously altered Passport accounts can be used to buy goods online and to view confidential data.

The idea that a single vulnerability in a widespread web service can immediately expose tens of millions of people to new threats has security experts paying attention. Today's conventional worms and viruses are infecting millions of computers in ten minutes, but a crafty web service worm could potentially conduct millions of falsified commercial transactions in a matter of minutes, something an MS-Office macro virus can't hope to do.

The complexity and popular use of .NET's execution model worries security experts. The widespread sharing of applications, code, and data around the Internet is bound to culminate in interesting future exploits. Luckily for us, .NET exploits have so far been limited to some growing-pain problems with Microsoft Passport and a few worms and viruses.

.NET Viruses

There are already at least three .NET worms and viruses: Donut, Serot, and Sharpei. Donut, discovered on January 9, 2002, was the first .NET virus. Sent only to researchers as concept malware, the buggy Donut attempts to infect all the .EXE files in the current folder and up to 20 folders above it. It contains a never-executed payload display message and only a small amount of MSIL code; it is mostly normal 32-bit assembly language, and the .NET files it infects are turned into regular-looking PE files. Donut was the first .NET virus, but it only had a short lead on the others.

Donut was quickly followed by the Serot worm, which arrives as an impersonated email from support@microsoft.com. It infects all .NET (MSIL) .EXE files on drive C: and will attempt to send itself to all email addresses in the Windows Address Book and any it finds in the Internet Explorer cache folder. Like the virus that followed it, Serot contains a VBS file that does the mass mailing; this appears to be easier for crackers to write in a script language than in MSIL. Serot attempts to terminate antivirus processes on infected PCs and contains a plug-in architecture similar to the one successfully used in the Hybris worm.

Then the Sharpei virus was discovered on February 26, 2002. It arrives in email pretending to be a Microsoft patch, MS02-010.EXE. Written in C#, it drops a Sharp.VBS file that sends itself to all contacts in the Microsoft Outlook address book. After messages are sent, the evidence is deleted from the Sent Items folder in Outlook.

Both the Sharpei and Donut viruses are direct action infectors, meaning they execute and do their damage upon running, and then exit until the next execution. All three "concept" programs have their problems and are unlikely to spread far. Antivirus researchers expect the future to bring memory-resident .NET viruses.

Note: Peter Szor, with Symantec, did detailed write-ups on Donut and Sharpei for the Virus Bulletin publication. You can visit www.peterszor.com or www.virusbtn.com for detailed reading on .NET infections.

Because all three .NET malware programs are very buggy and require .NET to be installed, none spread very far outside research laboratories. But a crucial point has been proven: malware writers are ready to exploit the .NET framework, and it won't be a five-year wait this time. Meanwhile, new features in other Microsoft platforms have raised concern among AV experts.

Windows XP Concerns

Windows XP has an improved model of NT's HAL, kernel, and user-mode processes. Overall, with XP and Server 2003, Microsoft has increased the stability and security of its operating systems. True, Internet Explorer and Outlook continue to be the weak points in Microsoft's Trustworthy Computing initiative, but the core operating systems are becoming more secure out of the box. At the same time, Microsoft cannot resist (and consumers demand) new features, and XP has plenty of those. Some have been exploited; most haven't...yet. The next part of this article briefly discusses the new XP feature sets that concern computer security analysts.

Windows Media Player

It used to be that you only had to worry about malicious executable content: data was data, and it could not be launched as an attack. Times change, and data content is often exploited in today's multimedia world. The content itself can be used maliciously, in a buffer overflow or through embedded script languages. Another common ruse is a file whose header claims it is one type of file when it contains something completely different, bypassing security-checking mechanisms. The multimedia program itself is often used for the attack: if the interface allows scripting or "skin" updating, rogue coders can instruct the program to do things that would otherwise be constrained by one of Internet Explorer's security zones.

Microsoft's Windows Media Player is installed by default on every version of Windows. The original release of XP came with version 8.0, although anyone can upgrade to version 9 for free. Several holes have been found in Windows Media Player over the last few years, and Microsoft has patched them when reported. The older versions of Windows Media Player have more security holes than the newer versions, but many people are hesitant to upgrade because of the bulkiness and restrictive Digital Rights Management features of the newer versions. To be fair to Microsoft, let's not forget that Flash files, RealPlayer, Winamp, and just about every other popular media distribution format has been found to have one or more exploit holes over the past year. But network administrators would appreciate it if Windows Media Player were not installed by default, and if upgrades were not offered to end-users via Windows Update after it has been removed on purpose.

WebDAV (Web Distributed Authoring and Versioning)

WebDAV is a feature installed on machines running XP, or IIS 5 or greater. WebDAV is an HTTP protocol extension that allows users to publish and collaborate on documents stored on the web. Contrary to common belief, WebDAV is a popular open standard, not just a Microsoft feature. There have been a handful of exploits against Microsoft's implementation of WebDAV, including denial of service attacks and buffer overflows. The biggest problem with WebDAV is that it is installed and turned on by default when most people don't use it. It's a good, powerful collaboration tool; it just needs more security analysis and should not be turned on by default. WebDAV is not turned on by default on Server 2003 and IIS 6.
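
On the client side, the WebDAV redirector runs as the WebClient service, and a machine that doesn't need it can switch it off. A sketch for an XP command prompt (note the space after "start=" is required by sc):

sc stop WebClient
sc config WebClient start= disabled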

Remote Desktop Connection

Remote Desktop Connection allows one XP Pro PC to remotely connect to and control another XP Pro PC with a pcAnywhere-style session. Remote Desktop, as it is called in the System Control Panel applet, uses Terminal Server's Remote Desktop Protocol (RDP) over TCP port 3389. It is not turned on by default and so far has not been exploited. Still, knowing that it is installed as an inactive shell on every Windows XP computer, many of which are poorly secured, raises some concerns.
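
A quick local check for the inactive shell, from the XP machine's own command prompt:

REM an empty result means nothing is listening on the RDP port
netstat -an | find "3389"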

Remote Assistance

Unlike Remote Desktop Connection, Remote Assistance is turned on by default. It allows one XP user to invite, using either email or instant messaging, another XP user to take remote control of their PC. Besides desktop control, the remote user can participate in chat sessions and transfer files. Invitations can remain open for many days, and the default is 30 days. One of the main concerns is that there is no vetting mechanism to guarantee who is who in the remote assistance scenario: a malicious remote user could impersonate a tech support person and plant malicious files. While there have been no public exploits using Remote Assistance, AV experts worry about poorly password-protected connections and buffer overflow attacks.

Internet Connection Firewall (ICF)

Microsoft's first attempt at a desktop firewall is laudable, but comes up a bit short. ICF's main deficiency is that it lacks the ability to block outgoing port traffic. Many malware programs, once installed, will initiate outbound communications to continue their maliciousness. It could be a remote access trojan contacting its originating hacker to advertise the successful intrusion, or an email worm with its own SMTP engine sending itself out around the world. In either case, because ICF allows all outgoing requests by default, the end-user will not be warned. Most of today's personal desktop firewalls would stop the request and alert the user. I hope that if Microsoft continues to support ICF as a firewall product, additional feature sets will be added and its usefulness increased. ICF is also installed on Server 2003.

UPnP

Universal Plug and Play is another feature that should be turned off by default. UPnP allows a Windows machine to discover UPnP devices (printers, scanners, etc.) on the network and auto-configure their use. UPnP ended up being XP's first big publicly touted hole, in December 2001: a buffer overflow that could be exploited over the Internet and, if a firewall did not block UDP port 1900, used to gain complete control of the machine. Luckily, UPnP is not even installed on Microsoft's latest offering, Windows Server 2003.
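
The mitigation mentioned above, blocking UDP port 1900 at the firewall, might look like this on a Linux perimeter box; a sketch:

iptables -A FORWARD -p udp --dport 1900 -j DROP    # keep Internet hosts from reaching the UPnP listener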

Simple File Sharing

XP Home Edition has a feature called Simple File Sharing. When a folder is shared, it is immediately accessible to everyone on the local network, and no specific permissions can be set. The folder can be set as read-only, but if changes are allowed, full control is given to anyone who can see the folder. AV experts worry that if a virus or worm gets loose on a home network of Windows XP Home machines, the malware will have no problem traveling machine to machine using network shares.
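
A quick way to audit this from the machine itself:

REM list every folder and printer the machine is currently sharing
net share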

Windows Messenger

Microsoft's Windows Messenger is installed by default on XP Pro and Home editions. Instant messaging (IM) clients open additional avenues for attack. First, there have been many buffer overflow attacks against instant messaging clients, even when the client is merely installed and not running. Second, IM clients give the unsuspecting Joe User yet another avenue for receiving malicious files, and many antivirus programs do not monitor IM file transfers. Third, there are malicious programs and viruses that specifically target Microsoft's IM clients. Although not attacked nearly as much as IRC and AOL's AIM clients, instant messaging is a technology being used before the security is all in place.

Office XP

Although affiliated with Windows XP in name only, here is a good point to discuss a potential security problem in Microsoft Office XP. One of the most touted features of Office XP is its ability to read and write files in XML format. Macro viruses, which for several years were the number one infection type, have been mostly tamed by Office's macro security and antivirus software. XML has the potential to let another round of new-technology viruses into our Office documents, because XML is an everyman's language: an XML file is what you define it to be. Besides text, it can contain executable code, scripting, multimedia content, whatever programmers want it to contain. As has been proven so many times in the past, flexibility and choice increase the risk of malicious exploitation.

I'm sure there are some features I missed that may be exploited in the future, but at the moment these are the main ones garnering increased scrutiny by security professionals.

Windows XP Security

Before this paper ends, I want to point out that security has been strengthened in Windows XP, and much more so in Windows 2003. XP was the first Microsoft operating system to offer a firewall (ICF), and it's better than nothing for the consumer who isn't motivated to install another vendor's personal firewall product. XP has Encrypted File System (EFS), Windows File Protection (WFP), Certificate Services, IPSec, Kerberos, Software Restriction Policies, and System Restore. All of these additional features fight malicious code and are welcome additions to the Microsoft family. Security reviews of Server 2003 have been positive: more unnecessary features are turned off by default, and file and registry settings have been strengthened.

Summary

The complexity of the .NET execution platform worries security experts. Once it is widespread, malicious coders will find holes between the interoperating layers and then execute security exploits. The pervasive nature of web services means that one malware threat could quickly compromise a large number of machines. There are already three .NET viruses and worms; although they are buggy, future viruses and worms will be able to perform without error as crackers begin to target .NET.

Windows XP contains much new functionality, some of which has been exploited and some of which has yet to be maliciously explored. XP also contains many new security features, like Windows File Protection and Internet Connection Firewall, which strengthen the OS's response to security threats.

Roger A. Grimes, CPA, MCSE (NT/2000), CNE (3/4), A+, has been fighting malicious code since 1987 and is the author of Malicious Mobile Code: Virus Protection for Windows (O'Reilly). He is a frequent writer and speaker on computer security topics. His next book, Honeypots for Windows (APress) will be available near the end of the year.

Source : http://securityfocus.com
