This chapter describes the following topics:
The NetRanger Data Management Package (DMP) provides key operational components of NetRanger:
Together, these services allow NetRanger users to manage security data. The relationship among the parts is illustrated in Figure 7-1.
DMP collects two basic forms of data: high-level event data and detailed binary IP session data (as diagrammed in Figure 7-2). DMP uses two types of log files to capture these types of data:
Event logs are ASCII files written to the /usr/nr/var directory with the naming convention log.YYYYMMDDHHMM. By default, level 2-5 events are written to event logs on a Director system, and level 1 events are written to an event log on the Sensor system. Log files capture data on alarms, commands, and errors.
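As an illustration of the naming convention, the timestamp embedded in an event log file name can be recovered mechanically. The helper below is a hypothetical sketch, not part of NetRanger:

```python
from datetime import datetime

def parse_event_log_name(filename):
    """Extract the creation timestamp from an event log file name
    following the log.YYYYMMDDHHMM convention (e.g. log.199804161658)."""
    prefix, _, stamp = filename.partition(".")
    if prefix != "log" or len(stamp) != 12 or not stamp.isdigit():
        raise ValueError("not an event log file name: %s" % filename)
    return datetime.strptime(stamp, "%Y%m%d%H%M")

# A log file created 16 April 1998 at 16:58
print(parse_event_log_name("log.199804161658"))  # 1998-04-16 16:58:00
```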
Event alarm data are unique because the record format contains optional as well as fixed components.
Fixed alarm data include the following information:
The optional data field contains information that cannot be described by the fixed portion of the alarm record format. This information may be a string associated with an event, or context data that consists of a snapshot of incoming and outgoing TCP traffic (a 256-byte maximum in both directions).
For example, a typical line written to a log file would look like the following:
4,1025294,1998/04/16,16:58:36,1998/04/16,11:58:36,10008,11,100,OUT,OUT,1,2001,0,TCP/IP,10.1.6.1,10.2.3.5,0,0,0.0.0.0,
Each comma-delimited field represents a different type of data. The first value indicates the kind of record being logged. The following table lists the field values and their corresponding record types.
| Field Value | Record Type |
|---|---|
| 0 | Default |
| 1 | Command |
| 2 | Error |
| 3 | Command Log |
| 4 | Event |
| 5 | IP Log |
| 6 | Redirect |
In the example log entry shown earlier, 4 denotes an Event record. The following table provides a reference for the rest of the fields in the sample Event record.
| Sample Field Value | Field Type |
|---|---|
| 4 | Record Type |
| 1025294 | Record ID |
| 1998/04/16 | GMT Datestamp |
| 16:58:36 | GMT Timestamp |
| 1998/04/16 | Local Datestamp |
| 11:58:36 | Local Timestamp |
| 10008 | Application ID |
| 11 | Host ID |
| 100 | Organization ID |
| OUT | Source Direction |
| OUT | Destination Direction |
| 1 | Alarm Level |
| 2001 | SigID |
| 0 | SubSigID |
| TCP/IP | Protocol |
| 10.1.6.1 | Source IP Address |
| 10.2.3.5 | Destination IP Address |
| 0 | Source Port |
| 0 | Destination Port |
| 0.0.0.0 | Router IP Address |
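Because the layout is a fixed, comma-delimited sequence, an Event record can be split into labeled fields mechanically. The following sketch is hypothetical (the field names are taken from the table above; the parser itself is not part of NetRanger):

```python
# Field names for an Event record, in on-disk order (from the table above)
EVENT_FIELDS = [
    "record_type", "record_id", "gmt_date", "gmt_time",
    "local_date", "local_time", "app_id", "host_id", "org_id",
    "src_direction", "dst_direction", "alarm_level", "sig_id",
    "sub_sig_id", "protocol", "src_ip", "dst_ip",
    "src_port", "dst_port", "router_ip",
]

def parse_event_record(line):
    """Map one comma-delimited Event record onto the field names above."""
    values = line.rstrip(",").split(",")   # records end with a trailing comma
    return dict(zip(EVENT_FIELDS, values))

rec = parse_event_record(
    "4,1025294,1998/04/16,16:58:36,1998/04/16,11:58:36,"
    "10008,11,100,OUT,OUT,1,2001,0,TCP/IP,10.1.6.1,10.2.3.5,0,0,0.0.0.0,"
)
print(rec["sig_id"], rec["src_ip"])  # 2001 10.1.6.1
```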
Two other types of records, Error and Command Log, are similar in structure to the Event record. Both of these record types share the same initial fields contained in Event records (Record Type, Record ID, GMT Date and Timestamp, Local Date and Timestamp, Application ID, Host ID, and Organization ID).
An Error record type has a ninth field denoting the actual Error String generated by a service. The Command Log record type also contains fields for the source Application ID, Host ID, and Organization ID, as well as the Command String. In both cases, you will have a complete record of errors and commands.
IP Session logs capture all incoming and outgoing TCP packets associated with a specific connection, and therefore contain binary data. They are written to a Sensor's /usr/nr/var/iplog directory with the naming convention of iplog.source_IP_address.
In addition to detecting attack signatures, sensord and packetd are able to monitor the traffic associated with a specific type of attack. For example, sensord can be configured to monitor all of the packets associated with an IP spoof. sensord or packetd creates a separate log file in /usr/nr/var/iplog for each of these monitoring sessions. The name of each session log file is based on the IP address of the attacking host (for example, iplog.10.145.16.152).
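Given the naming convention, the session log path for an attacking host can be derived directly. This is a hypothetical helper for illustration only:

```python
import posixpath

def iplog_path(attacker_ip, base_dir="/usr/nr/var/iplog"):
    """Build the IP session log path for an attacking host, following
    the iplog.<source_IP_address> naming convention."""
    return posixpath.join(base_dir, "iplog." + attacker_ip)

print(iplog_path("10.145.16.152"))  # /usr/nr/var/iplog/iplog.10.145.16.152
```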
There is a performance penalty associated with the logging of context data. DMP provides two options for controlling overhead:
By default, level 1 Event and IP Session logs are left on a Sensor until they are required. This prevents the relatively large amounts of data associated with these types of events from impeding NetRanger communications during periods of network load.
Even though the DMP is designed to transfer NetRanger data into industrial-strength databases, both types of data are initially written to flat files for two reasons: speed and fault tolerance. Data can be written to a flat file much faster than to a database, and access to flat files is guaranteed as long as the host system is operational. Databases, on the other hand, are subject to unpredictable load, and contain too many access layers to be truly fault tolerant: if the database is down, there is nowhere to turn.
NetRanger uses a simple push-pull process to migrate data from flat files to a database or data archive. Figure 7-3 diagrams the push-pull process for both types of log files.
The push process involves the loggerd service, which writes event, command, and error notifications into a single flat file in /usr/nr/var. This file is serialized based on configurable size and time thresholds set via nrConfigure. Once the current log file exceeds one of these thresholds, it is moved to /usr/nr/var/new and replaced by a fresh log file.
The pull process involves the sapd service, which relies on its own size and time thresholds. sapd pulls the oldest file from /usr/nr/var/new into /usr/nr/var/tmp and executes database load procedures specified by the user. After the log file is successfully loaded into a database, it is placed in /usr/nr/var/dump where it is processed by a user-defined purge process.
This basic push-pull process applies to IP session logs as well as event logs. The primary differences are that IP session logs are written to /usr/nr/var/iplog rather than /usr/nr/var/new, and they are automatically placed in /usr/nr/var/dump after a configurable amount of time.
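The pull side of this process can be sketched as follows. This is a simplified, hypothetical model of sapd's behavior (directory names follow the text above; the load callback stands in for the user-specified database load procedure):

```python
import os
import shutil

def pull_oldest(new_dir="/usr/nr/var/new",
                tmp_dir="/usr/nr/var/tmp",
                dump_dir="/usr/nr/var/dump",
                load=lambda path: None):
    """Move the oldest serialized log file into the staging area, run the
    user-specified database load procedure on it, then hand it to the dump
    directory for the user-defined purge process."""
    candidates = sorted(
        os.listdir(new_dir),
        key=lambda name: os.path.getmtime(os.path.join(new_dir, name)),
    )
    if not candidates:
        return None                    # nothing staged yet
    oldest = candidates[0]
    staged = os.path.join(tmp_dir, oldest)
    shutil.move(os.path.join(new_dir, oldest), staged)
    load(staged)                       # e.g. launch an Oracle load script
    shutil.move(staged, os.path.join(dump_dir, oldest))
    return oldest
```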
After data is loaded into a database, it can be analyzed for patterns and trends. Status reports relating to network activity and vulnerabilities can also be generated. Although any number of different third-party tools can generate these types of output, DMP is shipped with a small but comprehensive collection of Oracle SQL*Plus queries that show how event data can be analyzed relative to different perspectives, such as time versus events.
This section provides information on the following:
This section describes the following topics:
Before generating or customizing reports with SQL queries, you need to learn how to configure and view data management status information. This information gives you insight into the following indicators:
You can view File Management status in Brief or Full mode. Brief mode provides a summarized view of the FileMgmt information, and includes overall status and uptime, a summation of errors, and a trigger history. Full mode is a superset of Brief mode, and also includes information on scheduler status, normal summation, and trigger configuration.
To set your File Management viewing mode, follow these steps:
Step 1 On the Director interface, click Configure on the Security menu.
nrConfigure opens.
Step 2 Double-click Data Management.
The Data Management dialog box opens (see Figure 7-4).
Step 3 Click the General tab.
Step 4 To change the viewing preference to Brief, make sure that Verbose DMP Display is not selected.
Step 5 Click OK to close the Data Management dialog box.
Step 6 Click the transient folder and click Apply.
Step 7 After you set your File Management viewing mode, click Show>Database Info on the Security menu to display the following information:
On the Overall Status Display (see Figure 7-5), the Overall Status line (the first line in the status display) reflects the fact that sapd tracks its overall state: if a launched action returns an error condition, sapd knows that an error has occurred. If an error occurs, the Overall Status line reads something like the following:
Overall Status = ERROR Oracle_Load ;
When an error occurs, sapd's status remains in an ERROR state until one of the following occurs:
Once reset, the Overall Status line will look like this:
Overall Status = Normal
The Trigger History and Configuration sections (see Figure 7-6 and Figure 7-7) provide information on recent triggers run, and the configuration status of triggers.
The information summarized in the Trigger History section consists of the following:
The information summarized in the Trigger Configuration sections consists of the following:
To find out how to change the configuration of triggers, refer to the "Configuring Data Management" section of "Configuration Management."
The Directory Summary section (see Figure 7-8) provides information on each directory in /usr/nr/var, the disk partition in which NetRanger security data is logged and staged to third-party database or storage tools. The display summarizes the following information for each directory:
The Action Configuration section (see Figure 7-9) displays the current configuration of all database actions. The display summarizes the following information:
To find out how to change the configuration of actions, refer to the "Configuring Data Management" section of "Configuration Management."
NetRanger ships with SQL queries that can be run automatically by NetRanger or interactively by a human operator. The SQL queries themselves are the same in either case, but the entry methods differ.
Interactive queries are run by a human operator in the SQL*Plus environment, and are distinguished by the "@" character at the start of each query command. The four commands are: @event, @space, @time, and @system.
Automatic queries are run in batch mode by NetRanger. Information on when they are run and what the output looks like can be set in NetRanger's configuration files and in the Oracle report templates.
This section describes the following topics:
The queries shipped with the DMP generate basic reports, which are accessed via the following interactive SQL entry points:
In turn, each of these queries provides access to a number of different subqueries, such as event1.sql, event2.sql, and so on. The subquery that runs depends on the answers you give to the initial (event/space/time/system) query. In general, the higher the number of the subordinate query, the more detail is returned.
Table 7-1 lists the SQL user commands and subordinate queries.
| User Command | Query Type | Subquery Name | Definition |
|---|---|---|---|
| @event | Event Dimension Query | event1.sql | alarm level summary |
| | | event2.sql | alarm signature summary |
| | | event3.sql | alarm signature with string data summary |
| @space | Space Dimension Query | space1.sql | source signature summary |
| | | space2.sql | destination signature summary |
| | | space3.sql | connection signature summary |
| | | space4.sql | connection signature with string data summary |
| @time | Time Dimension Query | time1.sql | timeview signature summary |
| | | time2.sql | timeview source signature summary |
| | | time3.sql | timeview destination signature summary |
| | | time4.sql | timeview connection signature with string data summary |
| @system | System Query | system1.sql | age and count of records in each table |
| | | system2.sql | signature: name-number (interactive only) |
| | | system3.sql | organization: name-number (interactive only) |
Example 7-1 illustrates the interactive component of an event query.
SQL> @event

EVENT DIMENSION QUERIES
================================================
1 alarm level summary
2 alarm signature summary
3 alarm signature with string data summary

These queries summarize Sensor alarms with a primary focus on the EVENT dimension. You can filter the data returned to you with the following criteria: ORGANIZATION_NAME, MIN_EVENT_DATE

Please enter your desired value at the following prompts:
select query (1,2,3)> 1
organization name (%)> %
minimum event date (YYYY/MM/DD)> 1998/05/01
Typing @event from the SQL> prompt enters interactive mode. You can then select the type of query by choosing one of the available subqueries. For event queries, you can choose an alarm level summary, an alarm signature summary, or an alarm signature with string data summary.
In Example 7-1, typing 1 selects an alarm level summary. Typing the wildcard character (%) for the organization name provides a listing of alarms for all NetRanger organization names. Typing an actual organization name at this prompt will narrow the returned information.
The results of this interactive query are illustrated in Example 7-2.
event1: alarm level summary
organization_name = %
minimum_event_date = 1998/5/1

Org Name        Level    From To  Count   Recent
--------------- -------- ---- --- ------- -----------
Data Warehouse  5        IN   OUT 3       05:02 08:45
Net Systems     5        IN   IN  15      05:02 11:21
Net Systems     3        OUT  OUT 4       05:01 19:15
Net Systems     3        IN   IN  2       05:01 15:03
Net Systems     3        IN   OUT 79      05:02 09:28
Cisco Systems   3        IN   IN  231     05:02 09:37
Cisco Systems   3        IN   OUT 176     05:02 09:37
Cisco Systems   2        OUT  IN  9110    05:02 12:15
Cisco Systems   2        IN   IN  18      05:01 14:06
In the example, the alarm level activity from three organizations is profiled. From refers to the source of the alarm direction (in relation to the trusted internal network) and To indicates the destination of the alarm direction. The Count is the number of alarms that match the specific criteria for each line in the generated report. Recent gives a timestamp of the last alarm activity for that criteria.
The NetRanger-controlled SQL queries are set to run in daily, weekly, and monthly batch modes. You can use nrConfigure's Data Management dialog box (see Figure 7-10) to change the scheduling of reports as well as the content and organization of report data.
To change the scheduling of reports, you need to customize the following triggers:
For more information on customizing triggers, refer to the "Setting Triggers" section of "Configuration Management."
To change the content and organization of a report, edit the DAY, WEEK, and MONTH template files in the /usr/nr/bin/sap/sql directory. These files serve as report templates for NetRanger-run batch queries.
Each line in these template files contains a SQL command and its required arguments. For example, in the DAY template, the following line describes an event1 SQL query:
event1 % MIN_EVENT_DATE
This specifies that the event1.sql query is run, with two arguments: the wildcard character (%) for the organization name, and MIN_EVENT_DATE. In the DAY template, MIN_EVENT_DATE refers to the variable $Today-1, which upon execution of the batch report, is replaced by a literal date corresponding to the day previous to the day of report execution.
In the WEEK template, the MIN_EVENT_DATE refers to the variable $Week-1, which upon execution of the batch report, is replaced by a literal date corresponding to the week previous to the day of report execution.
Finally, in the MONTH template, MIN_EVENT_DATE refers to the variable $Month-1, which upon execution of the batch report, is replaced by a literal date corresponding to the month previous to the day of report execution.
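The substitution of these date variables can be modeled with a short sketch. This is hypothetical (the actual replacement is performed by NetRanger's batch reporting, and the month offset is approximated here as 30 days):

```python
from datetime import date, timedelta

def resolve_min_event_date(template, today=None):
    """Resolve MIN_EVENT_DATE for a batch report template: $Today-1,
    $Week-1, and $Month-1 become literal YYYY/MM/DD dates relative to
    the day the report runs."""
    today = today or date.today()
    offsets = {
        "DAY": today - timedelta(days=1),     # $Today-1
        "WEEK": today - timedelta(weeks=1),   # $Week-1
        "MONTH": today - timedelta(days=30),  # $Month-1 (approximated)
    }
    return offsets[template].strftime("%Y/%m/%d")

# A daily report run on 2 May 1998 covers events since the previous day
print(resolve_min_event_date("DAY", date(1998, 5, 2)))  # 1998/05/01
```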
You can also create new SQL queries and edit or add lines in the report templates to run the new queries instead of the default ones. If you create an event9.sql query, for instance, you could add the following line to the DAY template:
event9 % MIN_EVENT_DATE
The native Oracle schemas supported by the DMP are illustrated in Figure 7-11. Each box on the left identifies how event data is grouped prior to being loaded into a database table. Each box on the right represents the target database tables. The schemas for these tables are defined in the tables that follow.
The nr_log_alarm and nr_log_alarm_1 tables contain information about level 2-5 and level 1 alarms, respectively. The schema for the nr_log_alarm and nr_log_alarm_1 tables is defined in Table 7-2.
| Name | Type |
|---|---|
| EVENT_PROTOCOL | NUMBER(5) |
| RECORD_ID | NUMBER(10) |
| EVENT_DATE_GMT | DATE |
| EVENT_DATE_LOCAL | DATE |
| APP_ID | NUMBER(5) |
| HOST_ID | NUMBER(10) |
| ORG_ID | NUMBER(10) |
| FROM_STATE | CHAR(1) |
| TO_STATE | CHAR(1) |
| EVENT_LEVEL | NUMBER(5) |
| IP_SIGNATURE | NUMBER(10) |
| IP_SUB_SIGNATURE | NUMBER(10) |
| NETWORK_PROTOCOL | CHAR(3) |
| SRC_IP_ADDR | CHAR(32) |
| DST_IP_ADDR | CHAR(32) |
| SRC_PORT | NUMBER(5) |
| DST_PORT | NUMBER(5) |
| ROUTER_IP_ADDR | CHAR(32) |
| DATA_ALARM | CHAR(64) |
The nr_log_context table contains information on alarm contexts. NetRanger buffers 256 bytes of context data in each direction (incoming and outgoing). However, to accommodate a maximum of three escaped characters per context byte, the DATA_INCOM and DATA_OUTGO fields must accommodate 768 (256 x 3) bytes. The following translations occur:
CONTEXT CHAR=4A PRINTABLE CHAR=J
CONTEXT CHAR=4B PRINTABLE CHAR=K
CONTEXT CHAR=5C PRINTABLE CHAR=\\
CONTEXT CHAR=0D PRINTABLE CHAR=\0D
CONTEXT CHAR=FE PRINTABLE CHAR=\FE
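The translation illustrated above can be reproduced with a short sketch (hypothetical; NetRanger's own escaping code is not shown here). Printable ASCII passes through, the backslash is doubled, and every other byte expands to a three-character escape, which is why 256 context bytes may need up to 768 characters of storage:

```python
def escape_context(data):
    """Translate raw context bytes into printable form: printable ASCII
    passes through, backslash becomes \\\\, and all other bytes expand to
    a 3-character \\HH escape (hence 256 bytes -> up to 768 chars)."""
    out = []
    for b in data:
        if b == 0x5C:               # backslash is doubled
            out.append("\\\\")
        elif 0x20 <= b <= 0x7E:     # printable ASCII passes through
            out.append(chr(b))
        else:                       # everything else becomes \HH
            out.append("\\%02X" % b)
    return "".join(out)

print(escape_context(b"JK\x5c\x0d\xfe"))  # JK\\\0D\FE
```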
The schema for the nr_log_context table is defined in Table 7-3.
| Name | Type |
|---|---|
| RECORD_ID | NUMBER(10) |
| EVENT_DATE_GMT | DATE |
| HOST_ID | NUMBER(10) |
| ORG_ID | NUMBER(10) |
| DATA_MATCH | CHAR(64) |
| DATA_INCOM | VARCHAR2(768) |
| DATA_OUTGO | VARCHAR2(768) |
The nr_log_tcpconn table contains information on TCP connections. The schema for the nr_log_tcpconn table is defined in Table 7-4.
| Name | Type |
|---|---|
| RECORD_ID | NUMBER(10) |
| EVENT_DATE_GMT | DATE |
| EVENT_DATE_LOCAL | DATE |
| HOST_ID | NUMBER(10) |
| ORG_ID | NUMBER(10) |
| IP_SUB_SIGNATURE | NUMBER(10) |
| SRC_IP_ADDR | CHAR(32) |
| DST_IP_ADDR | CHAR(32) |
| SRC_PORT | NUMBER(5) |
| DST_PORT | NUMBER(5) |
| ROUTER_IP_ADDR | CHAR(32) |
The nr_log_error table contains information on errors in NetRanger data. The schema for the nr_log_error table is defined in Table 7-5.
| Name | Type |
|---|---|
| EVENT_PROTOCOL | NUMBER(5) |
| RECORD_ID | NUMBER(10) |
| EVENT_DATE_GMT | DATE |
| EVENT_DATE_LOCAL | DATE |
| APP_ID | NUMBER(5) |
| HOST_ID | NUMBER(10) |
| ORG_ID | NUMBER(10) |
| DATA_ERR | VARCHAR2(256) |
The nr_log_command table contains information on NetRanger commands executed. The schema for the nr_log_command table is defined in Table 7-6.
| Name | Type |
|---|---|
| EVENT_PROTOCOL | NUMBER(5) |
| RECORD_ID | NUMBER(10) |
| EVENT_DATE_GMT | DATE |
| EVENT_DATE_LOCAL | DATE |
| APP_ID | NUMBER(5) |
| HOST_ID | NUMBER(10) |
| ORG_ID | NUMBER(10) |
| SRC_APP_ID | NUMBER(5) |
| SRC_HOST_ID | NUMBER(10) |
| SRC_ORG_ID | NUMBER(10) |
| DATA_CMD | VARCHAR2(256) |