The first forensics vendor to develop a remote acquisition and analysis tool

Even though it is a DOS-based tool, it can acquire evidence from partitions using the File Allocation Table (FAT) file system, non-DOS partitions, and hidden DOS partitions. This includes not only files that are visible to the file system, but also data that has been deleted, and data that exists in slack space and in unallocated space on the disk. It supports acquisition from hard drives that are larger than 8.4 GB, as well as floppy disks and other storage media. It provides a built-in sector and cluster Hex Viewer to view data, and can create and restore compressed forensic images of partitions.

A feature of this tool that benefits an investigation is its logging capability. You can configure DriveSpy to document how the acquisition of evidence is conducted, including logging every keystroke that you make. The software writes these entries to a log file, which you can use later to report the actions you took to acquire the evidence.


URL: https://www.sciencedirect.com/science/article/pii/B9781597492768000066

Forensics Team Requirements Members

Leighton R. Johnson III, in Computer Incident Response and Forensics Team Management, 2014

Certified Skills That GCFEs Possess

Digital Forensics Essentials

Windows File System Basics

Fundamental Forensic Methodology

Evidence Acquisition Tools and Techniques

Law Enforcement Bag and Tag

Evidence Integrity

Presentation and Reporting of Evidence and Analysis

Windows XP, VISTA, and Windows 7 Investigation and Analysis

Windows In-Depth Registry Forensics

Tracking User Activity

USB Device Tracking and Analysis

Memory, Pagefile, and Unallocated Artifact Carving

Facebook, Gmail, Hotmail, Yahoo Chat, and Webmail Analysis

E-mail Forensics (Host, Server, Web)

Microsoft Office Document Analysis

Windows Link File Investigation

Windows Recycle Bin Analysis

File and Picture Metadata Tracking and Examination

Prefetch Analysis

Firefox and Internet Explorer Browser Forensics

InPrivate Browsing Recovery

Deleted File Recovery

String Searching and Data Carving

Fully Updated to include full Windows 7 and Server 2008 Examinations

Examine cases involving Windows XP, VISTA, and Windows 7

To learn more about the GCFE and the SANS Institute, visit their web site at http://computer-forensics.sans.org/certification/gcfe.


URL: https://www.sciencedirect.com/science/article/pii/B978159749996500011X

Data Quality Service Level Agreements

David Loshin, in The Practitioner's Guide to Data Quality Improvement, 2011

13.7 Data Quality Incident Reporting and Tracking

Supporting the enforcement of the DQ SLA requires a set of management processes for the reporting and tracking of data quality issues and corresponding activities. This can be facilitated via a system used to log and track data quality issues. By more formally requiring evaluation and initial diagnosis of emergent data events, encouraging data quality issue tracking helps staff members be more effective at problem identification and, consequently, at problem resolution.

Aside from improving the data quality management process, issue and incident tracking can also provide performance reporting including mean-time-to-resolve issues, frequency of occurrence of issues, types of issues, sources of issues, and common approaches for correcting or eliminating problems. A good issues tracking system will eventually become a reference source of current and historic issues, their statuses, and any factors that may need the actions of others not directly involved in the resolution of the issue.

Conveniently, many organizations already have some framework in place for incident reporting, tracking, and management, so the transition to instituting data quality issues tracking focuses less on tool acquisition, and more on integrating the concepts around the “families” of data issues into the incident hierarchies and training staff to recognize when data issues appear and how they are to be classified, logged, and tracked. The steps in this transition will involve addressing some or all of these directives:

1. Standardize data quality issues and activities: Understanding that there may be many processes, applications, underlying systems, and so on, that “touch” the data, the terms used to describe data issues may vary across lines of business. To gain a consistent and integrated view of organizational data quality, it is valuable to standardize the concepts used. Doing so will simplify reporting, making it easier to measure the volume of issues and activities, identify patterns and interdependencies between systems and participants, and ultimately to report on the overall impact of data quality activities.

2. Provide an assignment process for data issues: Resolving data quality issues requires a well-defined process that ensures that issues are assigned to the individual or group best suited to efficiently diagnosing and resolving the issue, and that ensures proper knowledge transfer to new or inexperienced staff.

3. Manage issue escalation procedures: Data quality issue handling requires a well-defined system of escalation based on the impact, duration, or urgency of an issue, and this sequence of escalation will be specified within the DQ SLA. Assignment of an issue to a staff member starts the clock ticking, with the expectation that the problem will be resolved as directed by the DQ SLA. The issues tracking system will enforce escalation procedures to ensure that issues are handled efficiently, as well as prevent issues from exceeding response performance measures.

4. Document accountability for data quality issues: Accountability is critical to the governance protocols overseeing data quality control, and as issues are assigned to some number of individuals, groups, departments, or organizations, the tracking process should specify and document the ultimate issue accountability to prevent issues from dropping through the cracks.

5. Manage data quality resolution workflow: The DQ SLA essentially specifies objectives for oversight, control, and resolution, all of which define a collection of operational workflows. Many issue tracking systems not only provide persistent logging of incidents and their description, they also support workflow management to track how issues are researched and resolved. Making repeatable and efficient workflow processes part of the issues tracking system helps standardize data quality activities throughout the organization.

6. Capture data quality performance metrics: Because the DQ SLA specifies performance criteria, it is reasonable to expect that the issue tracking system will collect performance data relating to issue resolution, work assignments, volume of issues, frequency of occurrence, as well as the time to respond, diagnose, plan a solution, and resolve issues. These metrics can provide valuable insights into the effectiveness of current workflows, systems, and resource utilization, and are important management data points that can drive continuous operational improvement for data quality control.

Implementing a data quality issues tracking system provides a number of benefits. First, information and knowledge sharing can improve performance and reduce duplication of effort. Furthermore, an analysis of all the issues will permit DQ staff to determine whether repetitive patterns are occurring, their frequency and impact, and potentially the source of the issue. Tracking issues from data transmission to customer support problem reporting will ensure that a full life cycle view of the data is maintained and that issues are identified and recorded. And lastly, since we know that issues arising upstream in the data life cycle may have critical consequences downstream, employing a tracking system essentially trains people to recognize data issues early in the information flows as a general practice that supports their day-to-day operations.


URL: https://www.sciencedirect.com/science/article/pii/B9780123737175000130

Inspection, Monitoring, Auditing, and Tracking

David Loshin, in The Practitioner's Guide to Data Quality Improvement, 2011

17.5 Incident Reporting, Notifications, and Issue Management

Growing awareness in the organization that data quality is managed proactively will also lead to modified expectations associated with identified errors. In organizations with a low level of data quality maturity, individuals might not be able to differentiate between a system error that results in incorrect business process results (like those associated with programming errors) and a data error (such as inconsistencies with data inputs). It is the data quality practitioner's role to educate the staff in this distinction, and to transition the organization to respond properly to both emergent errors and remediation (as described in chapter 12).

Conveniently, many organizations already have some framework in place for incident reporting, tracking, and management, so the transition to instituting data quality issues tracking focuses less on tool acquisition, and more on integrating the concepts around the “families” of data issues into the incident hierarchies and training staff to recognize when data issues appear and how they are to be classified, logged, and tracked. The steps in this transition involve addressing some or all of these directives:

Standardize data quality issues and activities, which may be organized around the identified dimensions of data quality.

Provide an assignment process for data issues, which will be based on the data governance framework from the organizational standpoint, and on the DQ SLAs from an operational standpoint.

Manage issue escalation procedures, which should be explicit in the DQ SLAs.

Document accountability for data quality issues, in relation to the data stewards assigned to the flawed data sets.

Manage data quality resolution, also part of operational data governance.

Capture data quality performance metrics.

This last bullet item is of particular interest, because aside from improving the data quality management process, issue and incident reporting and management can also provide performance reporting, including mean-time-to-resolve issues, frequency of occurrence of issues, types of issues, sources of issues, and common approaches for correcting or eliminating problems. A good issues tracking system will eventually become a reference source of current and historic issues, their statuses, and any factors that may need the actions of others not directly involved in the resolution of the issue.

17.5.1 Incident Reporting

There are essentially two paths for reporting data quality incidents. The first takes place when an automated inspection of a defined data quality control fails. Alerts are generated when inspection of that control indicates missed objectives. The second occurs when an individual recognizes that an unexpected error has impacted one or more business processes and manually reports the incident. In either case, reporting the incident triggers a process to log the event and capture characteristic information about the incident, including:

A description of the error,

Where and how it manifested itself,

The name of the individual or automated process reporting the incident,

The measure or data quality rule that failed as well as associated scores,

The type of error (e.g., data quality dimension),

The list of stakeholders notified,

The time the incident was reported,

The resolution time as specified in the data quality service level agreement, and

The name of the data steward with initial responsibility for remediation.

At some point, the data steward will start the remediation process, initially assessing impact and assigning a priority.

17.5.2 Notification and Escalation

The DQ SLA will enumerate the list of individuals to be notified for each data quality rule and each type of violation. Once an incident is reported, the DQ SLA will indicate the expectations regarding resolution of the issue in terms of completeness and in terms of the time frame in which the issue is to be resolved. The resolution process may involve multiple steps, and each time one of the associated tasks is completed, the record in the incident management system needs to be updated to reflect the changed status.

If one of those steps does not occur within the defined time frame, it suggests that the data steward may require additional assistance (or perhaps “motivation”) from superiors in that steward's management chain. An escalation strategy (as depicted in Figure 17.2) provides a sequential list of more senior managers within an organization notified when the level of service defined in the DQ SLA is not met.


Figure 17.2. Progressive escalation to more senior managers as issues remain unresolved.

Of course, both the level of attention and tolerance decrease as one marches up the management chain, so it is in the best interests of the data governance team to resolve critical issues within the directed time frames.

17.5.3 Tracking

One last aspect of the incident management process is keeping tabs on the status of outstanding data quality issues. In chapter 12 we reviewed a process for determining the criticality of data quality issues. Tracking allows the data governance team to review issues that have not yet been resolved, ordered by their prioritized level of criticality. This provides an opportunity to review the assignment of priority, decide to either increase or decrease the dedicated resources assigned to each issue, and perhaps change the assigned priority.


URL: https://www.sciencedirect.com/science/article/pii/B9780123737175000178

State Water Planning Model

Donald W. Boyd, in Systems Analysis and Modeling, 2001

11.4.1 Knowledge Acquisition via Inference Matrix

Primary data were not available for the groundwater subsystem and may possibly never become available. However, the knowledge and experience of domain experts provided testable substitutes for primary data by exploiting system relationships. An inference matrix was used as a knowledge acquisition tool to infer possible physical and/or statistical correlations between dependent (in this instance, endogenous) variables and independent variables. However, circumscription confined selection of homogeneous independent variables for each of the five regression equations to variables from the Xj column of the vector table. Table 11.2 displays postulated regression relationships based on interviews with hydrologists from the Civil Engineering Department of Montana State University and the Montana Department of Natural Resources and Conservation (DNRC).

Table 11.2. Yellowstone River Basin Inference Matrix

Endo   X1   X2   X3   X4   X5   X6   X7   X8   X9   X10  X11  X12  X13  K
X7     +S   +W   +W   +W   +W   +W        +W   −M   −M   +S   −S   +M   Y
X8     +W   +S             +M   +W   +M        −W   −W   +M   −M   +W   Y
X11    +S   +S   +M   +M   +S   +M   +S   +M                  +W
X12    +M   +M             +W   +M   +M   +W   −S   −S   +W             Y
X13    −S   −S   −M   −M   −S   +1   −W   −W   −W   −W   −M

Coded elements of the matrix infer the existence of correlation, either positive or negative, but make no distinction between physical and statistical correlations. Strength of correlation between each dependent variable and each possible independent Xj was coded “S,” “M,” or “W,” indicating strong, moderate, or weak correlation, respectively. The impact of independent variables omitted from three of the regression equations by circumscription was acknowledged by including a vertical axis intercept, the constant form K, as indicated by “Y” for yes in the K column.

Although use of the inference matrix in Table 11.2 provided excellent focus of attention, several sessions were required with the domain experts. Selected correlations are bolded, thus providing explicit regression relationships for Equations 2, 3, 5, 6, and 7. A bolded “1” appears in the X6 column for X13 (Equation 7) because X13 = X6 − explained surface losses.


URL: https://www.sciencedirect.com/science/article/pii/B9780121218515500113

Memory Analysis

Jaron Bradley, in OS X Incident Response, 2016

Memory Acquisition

Acquiring memory can be done very easily on OS X in just a few simple steps using OSXPmem. This tool can also be downloaded from the rekall github pages at https://github.com/google/rekall/releases under the pmem memory acquisition tools section. At the time of writing, osxpmem_2.0.1 was the latest release.

After downloading OSXPmem the first step is to unzip it.

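A minimal sketch of this step, assuming the osxpmem_2.0.1 release named above was downloaded as a zip archive into the working directory:

$ unzip osxpmem_2.0.1.zip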

This creates a new app called osxpmem.app. Inside this app are the Mach-O binary used to dump memory and the KEXT bundle which needs to be loaded. However, in order to load a KEXT, it must belong to the root user and the wheel group. This shouldn’t be a problem for us since our incident response collection script should already be running as root.


Remember that a KEXT bundle is just an organized directory. chown -R root:wheel will apply our new permissions to the .kext as well as the contents within it. Then we load the KEXT file using kextload.
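A sketch of those two commands; the path of MacPmem.kext inside the unzipped app bundle is an assumption based on the layout described above, and sudo is redundant if the collection script is already running as root:

$ sudo chown -R root:wheel osxpmem.app/MacPmem.kext
$ sudo kextload osxpmem.app/MacPmem.kext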

After loading this KEXT you can find two new device files located at /dev/pmem and /dev/pmem_info. The /dev/pmem device now contains raw memory, and /dev/pmem_info contains all sorts of good information about the system that has been collected by MacPmem.kext.

You can take a look at the different available arguments using osxpmem -h.


We can now dump raw memory from /dev/pmem using osxpmem like so.

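A sketch of the acquisition command; the path of the binary inside osxpmem.app is an assumption, and memory.aff4 is the archive name used in the rest of this walkthrough:

$ osxpmem.app/osxpmem -o memory.aff4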

where -o lets us specify the name of the memory dump. You’ll notice we’ve used the file extension of aff4. This extension stands for advanced forensic file format and is a format managed by Google. This file format is built on top of the zip file format. Some versions of “zip” can even look inside the archive. However, it’s easiest to look at and manage the contents by using osxpmem itself. After we’ve dumped memory to this aff4 file we can view its contents using the -V switch.

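A sketch of that step, assuming -V simply takes the archive as its argument (the listing output is not reproduced here):

$ osxpmem.app/osxpmem -V memory.aff4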

This lists all sorts of data about the contents of the aff4. It shows we’ve added one artifact to this archive. We can see that a unique identifier has been created for the archive file - a6edf0bf-ff79-4267-82ec-ba01ee64258f. This ID is called an AFF4 URN. Any additional items added to this archive will be tagged with this same identification number. OSXPmem has also automatically assigned this artifact the category of “memory:physical”. It also shows that upon creating the memory artifact, compression was automatically applied, resulting in a file much smaller than the actual memory dump. As seen in the help display, we can use osxpmem to add more artifacts to this archive. Adding files is as easy as using the -i argument. We can now add the swapfile artifacts (if they’re not encrypted) using this argument.

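A sketch of that step; the swapfile paths under /private/var/vm/ are the OS X defaults, and the exact way -i combines with the output archive may differ between osxpmem releases (check osxpmem -h):

$ osxpmem.app/osxpmem -i /private/var/vm/swapfile0 -i /private/var/vm/swapfile1 -o memory.aff4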

We can see that this has added two new files to memory.aff4. Let’s take another look at the archive listing.


Sure enough, we see that our two new swapfiles (swapfile0, swapfile1) have been added to our aff4 archive. If we wanted to, we could even have used osxpmem to collect all of the artifacts mentioned in this book.

Once the aff4 file has been moved to our analysis machine we can easily extract these artifacts using the --export switch. To extract the physical memory dump I would use the following command:

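A sketch of the export step; the pairing of --export with -o and the archive name is an assumption assembled from the description that follows, so check osxpmem -h for the exact syntax of your release:

$ osxpmem.app/osxpmem --export /dev/pmem -o memory.dmp memory.aff4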

The aforementioned command shows us exporting /dev/pmem from the aff4 archive to a new file called memory.dmp. This is the raw memory dump that we will be performing further analysis on.


URL: https://www.sciencedirect.com/science/article/pii/B9780128044568000078

Data Requirements Analysis

David Loshin, in Business Intelligence (Second Edition), 2013

Introduction

When the key stakeholders in the organization have agreed to pursue a business intelligence (BI), reporting, and analytics strategy, it is difficult to resist the urge to immediately begin evaluating tools for the purposes of acquisition and deployment. The process of BI tool acquisition is well defined, with clear goals, tasks, and measurable outcomes. The problem, though, is that while a successful acquisition process gives the appearance of progress, when you are done, all you have is a set of tools. Without the data to analyze, your reporting and analysis still has to wait.

So how do you determine what data sets are going to be subjected to analysis? Since the BI methodology advocated in this book concentrates on the business expectations for results, our motivation for determining the data requirements should be based on the analyses to be performed. Performance measures will be calculated using the information that reflects the way the business is being run. These data sets are typically those used to capture the operational/transactional aspects of the different business processes. While some organizations already use these data sets to establish baseline measures of performance, there may be additional inputs and influencers that impact the ability to get visibility into opportunities for improvement, such as geographic data, demographic data, and a multitude of additional data sets (drawn from internal as well as external data sources), along with other data feeds and streams that may add value.

Identifying the data requirements for your business analytics needs will guide the BI process beyond the selection of end-user tools. Since data acquisition, transformation, alignment, and delivery all factor into the ability to provide actionable knowledge, we must recognize that data selection, acquisition, and integration are as important (if not more important) as acquiring tools in developing the business analytics capability. In this chapter we will consider the business uses of information to frame the process for engaging the downstream data consumers and soliciting their input. Their needs for measures to help improve performance will guide the definition of data requirements as well as specify the suitability criteria for selection of data sources.


URL: https://www.sciencedirect.com/science/article/pii/B9780123858894000077

Acquisition of Commonsense Knowledge

Erik T. Mueller, in Commonsense Reasoning (Second Edition), 2015

Rapid Knowledge Formation tools

In 1999, the Rapid Knowledge Formation (RKF) program was started by the Defense Advanced Research Projects Agency (DARPA). Its goal was to develop technologies allowing subject matter experts to enter knowledge directly into knowledge bases instead of requiring knowledge engineers to enter knowledge. A set of acquisition tools was developed for Cyc as part of this project. Tools are provided for

identifying microtheories relevant to a topic

listing the similarities and differences between two concepts

deciding where to place a new concept in the hierarchy

finding concepts related to a concept

suggesting relations between two concepts

defining a concept using a natural language phrase

defining a fact using a natural language sentence

defining a concept by modifying the definition of a similar concept

defining IF-THEN rules

defining a script, its participants, its events, and the ordering of its events

Feedback to the ontological engineer is provided by knowledge entry facilitation (KE facilitation) rules. Four types of feedback are provided: requirements, strong suggestions, weak suggestions, and neighbor suggestions. For example:

Requirement: An arity must be defined for a relation.

Strong suggestion: A state change event type should have a type defined for the object of the state change.

Weak suggestion: A duration should be defined for an event type.

Neighbor suggestion: Something that is true of one concept may also be true of a similar concept.

Several other acquisition tools have been developed for Cyc. The Factivore tool allows entry of facts such as properties of objects. The Predicate Populator allows entry of relations between concepts, such as the ingredients of a salad. The Abductive Sentence Suggestor uses machine learning on facts stored in the knowledge base to suggest new rules. Cyc also runs machine learning at night to identify inconsistencies and possible omissions in the knowledge base.


URL: https://www.sciencedirect.com/science/article/pii/B978012801416500019X

Modernization Technologies and Services

William Ulrich, in Information Systems Transformation, 2010

Technology Capability/Needs Alignment

When looking at modernization technology options, it is important to define your needs within a broad-based view of enterprise requirements. Too many times we find tools have been deployed in companies and very few people are aware of their existence. Companies end up buying similar tools when they already own a tool that meets their needs.

This stems from project-driven tool acquisition and deployment, which involves one area justifying and using that tool on a single project. If the project is canceled, the tool is shelved — even though many other areas of the enterprise could use the tool. In other cases, a tool has one user who leaves the organization and the tool is likewise shelved. This happens all too frequently and costs an organization in lost opportunities and real dollars.

One last needs-alignment issue involves what the project requires versus what the tool can actually do, in contrast to how a vendor promoted the product. Many times, companies license a tool with one idea in mind, but the vendor pushes the tool's canned solutions onto the project team, and as a result project goals and customer plans are derailed. This happened at one company that licensed a product to streamline and improve its existing COBOL environment. Yet once the vendor came in and helped install the tool, the only thing they ended up doing was trying to eliminate dead code and reduce a complexity metric that did not need lowering. As a result, the tool delivered suboptimal value to the system, to the project, and to the organization.

The bottom line is that organizations must stay focused on their real needs and not let a tool or a vendor derail their plans.


URL: https://www.sciencedirect.com/science/article/pii/B9780123749130000020

Case Processing

David Watson, Andrew Jones, in Digital Forensics Processing and Procedures, 2013

9.9.1.4.2 Acquiring a Tablet Computer

1. The tablet computer to be imaged is connected to the acquisition machine via the write blocker to prevent evidence contamination.

2. Tablet computer acquisition may require specialist software or tools and must only be undertaken by Forensic Analysts who are competent on the specialist software or tool being used.

3. The acquisition software is used, according to the manufacturer’s recommended procedures, to acquire a forensic image of the tablet computer. The Forensic Laboratory uses forensic software for acquiring and processing images from all tablet computers that integrates seamlessly with its main case processing software.

4. Once the forensic acquisition has taken place using the chosen forensic imaging tool, the MD5 hashes of the acquisition and the verification should be checked to ensure that they are the same. If they match, the bit-stream image is exact; if they do not, the copy is not exact and must be redone until the hashes match (a minimal command-line sketch of this check follows these steps).

5. Experience shows that a different version of the same acquisition tool, or a different acquisition tool, may solve the problem.

6. Once one complete image has been taken and the hashes match, a second image must be taken; this may use the same tool or a different one. The reason for this is that if one image corrupts, there is a fallback. This process may require a second dedicated case disk for the image.

7. Once two complete and exact images have been made, the original media should be returned to the Forensic Laboratory Secure Store and signed back in after being resealed as defined in Section 9.6.1.

8. Details of the acquisition process shall be recorded on the Cell Phone Details Log as given in Appendix 23.

9. All work carried out must be recorded in the Case Work Log as defined in Appendix 9.
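As referenced in step 4, a minimal sketch of an independent hash check on the acquired image; the image file name is a placeholder and md5sum is the Linux utility name (md5 on OS X/BSD):

$ md5sum tablet_image.dd
# Compare the value printed above with the acquisition and verification hashes reported
# by the imaging tool; if they differ, the copy is not exact and must be redone.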

Note 1

To examine a tablet computer, it may be necessary to turn it on, and this will make changes to it as it boots up. This breaches ACPO Principle 1, and so Principle 2 must be relied on. This means that the requirements for competence, and being able to prove it, as required in Principle 2, are even more important, as is the requirement for the audit trail in Principle 3.

Note 2

Some tablet computers will implement screen locking after a set period and require entry of the pass code to access the device. Some commercial software can bypass this, but if this is not possible, then consideration should be given to making changes to the settings on the device to keep the pass code protection from activating. This breaches Principle 1 as in Note 1 above, so the same proviso must be made.

Note 3

The Forensic Analyst (or First Responder) should avoid touching the screen as much as possible, as this may activate the tablet computer and make changes to it.

Note 4

It may be possible to disable the account with the service provider.

Note 5

The Forensic Analyst must be aware of the possibility of the tablet computer being booby-trapped or containing malware (e.g., Trojans to wipe the disk) and have a contingency plan in place if needed.

Which agency introduced training on software for forensics investigations by the early 1990s?

By the early 1990s, specialized tools for computer forensics were available. The International Association of Computer Investigative Specialists (IACIS) introduced training on software for forensics investigations, and the IRS created search-warrant programs.

What is the most common and flexible data acquisition method?

Bit-stream disk-to-image files. This is the most common data acquisition method in the event of a cybercrime. It involves cloning a disk drive, which allows for the complete preservation of all necessary evidence. Programs used to create bit-stream disk-to-image files include FTK, SMART, and ProDiscover, among others.
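FTK, SMART, and ProDiscover are the tools named above; purely as an illustration of the underlying idea, a bit-stream copy and its verification can be sketched with the standard dd utility (the device and file names here are placeholders):

$ dd if=/dev/sdb of=evidence.img bs=4M
$ md5sum /dev/sdb evidence.img   # matching hashes confirm the image is an exact bit-stream copy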

Does Windows have built in hashing algorithm tools for digital forensics?

Similar to Linux, Windows also has built-in hashing algorithm tools for digital forensics (for example, certutil -hashfile and PowerShell's Get-FileHash). Some acquisition tools don't copy data in the host protected area (HPA) of a disk drive.

What are the five major function categories of any digital forensics tool?

Five major categories:
Acquisition
Validation and verification
Extraction
Reconstruction
Reporting