LogStack is a central log management and SIEM service that includes software installation, configuration and administration.

LogStack has been built upon the idea that customers pay for knowledge and experience.

As part of the LogStack service, we install the necessary software packages (for the central management of logs) on the client’s server, configure them to work in the client’s environment, and ensure the ongoing management and maintenance of the solution. The LogStack service is suitable for use on the client’s virtual or physical servers, on-premises or in the cloud. Since the solution is mainly based on software distributed under free licenses, it does not come with the “golden handcuffs” of license fees and future maintenance costs.

LogStack provides significant added value from a cybersecurity, IT, and data protection perspective (covering applications, development, and other daily IT management). It also helps with compliance with ISO and E-ITS standards.

WITH THE LOGSTACK SERVICE, YOU GET WITHIN 1-3 MONTHS:

  • High-availability central log management system;
  • Configuration of basic functionalities:
    • A. Notifications
    • B. Reports
    • C. Views
  • Implementation of standard and other commonly used log sources;
  • LogStack user training after implementation

Administrative challenges

  • “I am unaware of the location of different logs in our organization.”
  • “We have spent an excessive amount of time and resources on our log management system, yet we are still far from reaching our goal.”
  • “Who has access to our logs? What are their access rights?”
  • “How can I centralize all IT logs?”
  • “I want to perform log-based analytics to assess the past and present situation and to make predictions about the future.”
  • “I would like to be alerted about anomalies and critical events.”
  • “Our software logs are unreadable.”
  • Log management is resource-intensive.
  • A large portion of log management projects remain unfinished or are only partially implemented.
  • The principle of “no logs, no problems” no longer applies in today’s world.
  • Dealing with logs after an incident is too late.
  • Log management is particularly important in the field of cybersecurity and data protection.
  • Threats, attacks and leak sources can be analyzed and identified through logs.

LogStack includes

Installation, management, updates, and development of a central log management and SIEM environment (on customer-owned and managed infrastructure and servers, whether in a physical, virtual, or cloud (IaaS) data center).

  • Collected logs are normalized and indexed, allowing for rapid processing: searches, correlations, and visualizations.
  • The user interface already includes default views and dashboards for immediate analysis of different types of logs.
  • Configuration of automatic log event analysis, SIEM (more than 600 built-in rules), and notifications for anomalies and critical security events.
  • The primary user interface is web-based and allows for flexible access control management (RBAC).
  • LogStack’s architecture and installation are designed to comply with the requirements of the ISKE M-level infrastructure.
  • All LogStack’s internal and external connections are secured (authenticated and encrypted) with an internal PKI.
  • Tools and procedures are available for quick log source integration.
  • The service is based on clustered Elasticsearch and related components.

LogStack service

Installation (customer-owned servers)

  • LogStack 3-server cluster solution
  • AAA module integration
  • Configuring Data Access (RBAC)
  • Design of indexes
  • Integration of standard log sources (5 pcs)
  • Creation of workflows and dashboards (3 pcs)
  • Automated rules and notifications (3 pcs)
  • Basic configuration of the SIEM module
  • User training

Plus Service Package

  • LogStack developments and updates
  • LogStack Technical Support
  • Integration of new standard log sources
  • ByteLife monitoring
  • Monthly review (including operating system, containers, etc.)

Premium Service Package

  • LogStack developments and updates
  • LogStack Technical Support
  • Integration of all log sources (standard and non-standard)
  • ByteLife monitoring
  • Daily administration (including operating system, containers, etc.)
  • Creating dashboards and notifications
  • SIEM monthly fine-tuning
  • Quarterly trainings; workshops on request

Add-on features

  • Integrity assurance module
  • SIEM fine-tuning
  • Trainings and workshops
  • Dual-active HA disaster recovery

LogStack architecture

LogStack modules

Analysis module and SIEM

The analysis module allows the analyst, system administrator, and information security manager to visually process the collected information. It enables flexible searching, filtering, and grouping of information, easy creation of different visualizations and aggregations, and combining them into dashboards. A large number of views and dashboards for the most commonly used log types are available immediately. The module also includes the necessary SIEM functions, automatic analysis capabilities, threat feed integrations, and the creation of cases and timelines. More than 600 rules, grouped into more than 50 categories and mapped to the MITRE ATT&CK framework, are immediately available for automatic analysis.
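As an illustration only (not LogStack configuration), the following Python sketch shows the kind of search-plus-aggregation query the analysis module runs against Elasticsearch, here counting failed SSH logins per source IP over the last 24 hours. The endpoint, index pattern, credentials, and ECS-style field names are assumptions.

```python
# Illustrative sketch: aggregate failed SSH logins per source IP.
from elasticsearch import Elasticsearch

es = Elasticsearch(
    "https://logstack.example.internal:9200",   # hypothetical endpoint
    ca_certs="internal-ca.pem",                 # CA issued by the PKI module
    basic_auth=("analyst", "********"),
)

resp = es.search(
    index="logs-system.auth-*",                 # assumed index pattern
    size=0,
    query={
        "bool": {
            "filter": [
                {"term": {"event.outcome": "failure"}},
                {"term": {"event.action": "ssh_login"}},
                {"range": {"@timestamp": {"gte": "now-24h"}}},
            ]
        }
    },
    aggs={"by_source": {"terms": {"field": "source.ip", "size": 10}}},
)

for bucket in resp["aggregations"]["by_source"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```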

Acquisition modules

are installed on the servers from which logs need to be collected (they come pre-installed on LogStack servers). These are usually Elastic’s filebeat or winlogbeat, which establish a highly available, TLS-secured connection to LogStack, typically to the Reception Module (logstash) and the Storage Module (elasticsearch). For collecting logs from cloud servers, pull-type transport can also be used, where the central Reception Module (logstash) connects to the cloud server and pulls fresh logs. Automation tools (ansible playbooks) are available for installing and configuring the acquisition modules.
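The real acquisition modules are filebeat/winlogbeat with their own configuration format; purely to illustrate the push-type transport described above, here is a minimal Python sketch that tails a log file and ships new lines to a Reception Module over a mutually authenticated TLS connection. The host, port, and certificate paths are assumptions.

```python
# Illustrative sketch of push-type log shipping over TLS (not the real agent).
import socket
import ssl
import time

LOGSTASH_HOST = "logstack-rx.example.internal"   # hypothetical Reception Module
LOGSTASH_PORT = 6514                              # assumed TLS log input port
LOG_FILE = "/var/log/app/app.log"

ctx = ssl.create_default_context(cafile="internal-ca.pem")  # PKI module CA
ctx.load_cert_chain("client.crt", "client.key")             # client authentication

def follow(path):
    """Yield new lines appended to a file, tail -f style."""
    with open(path, "r") as fh:
        fh.seek(0, 2)  # start at the end of the file
        while True:
            line = fh.readline()
            if not line:
                time.sleep(1)
                continue
            yield line

with socket.create_connection((LOGSTASH_HOST, LOGSTASH_PORT)) as raw:
    with ctx.wrap_socket(raw, server_hostname=LOGSTASH_HOST) as tls:
        for line in follow(LOG_FILE):
            tls.sendall(line.encode("utf-8"))
```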

Reception Module

usually consists of three subsystems, each redundant, for consuming different types of logs:

  1. The syslog-buffer module receives syslog-based log streams over TCP/UDP, buffers them, and preprocesses them. This is often needed for network devices.
  2. Elastic’s logstash is usually used for receiving TLS-secured log streams from servers and from the local syslog buffer. Here, information is normalized, transformed, and enriched as needed.
  3. An independent local log flow provides a stable log channel for handling LogStack’s internal logs.
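In practice the normalization and enrichment happen inside logstash filters; the following Python sketch only illustrates what that step does, turning a classic syslog line into a structured event with ECS-like field names (the field names are assumptions).

```python
# Illustration of the normalization step performed by the Reception Module.
import re
from datetime import datetime, timezone

SYSLOG_RE = re.compile(
    r"^(?P<ts>\w{3}\s+\d+\s[\d:]{8})\s(?P<host>\S+)\s(?P<prog>[\w\-/]+)"
    r"(?:\[(?P<pid>\d+)\])?:\s(?P<msg>.*)$"
)

def normalize(raw_line: str) -> dict:
    """Turn a classic syslog line into a structured, enriched event."""
    m = SYSLOG_RE.match(raw_line)
    if not m:
        return {"event": {"original": raw_line}, "tags": ["_parse_failure"]}
    return {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "host": {"name": m.group("host")},
        "process": {"name": m.group("prog"), "pid": m.group("pid")},
        "message": m.group("msg"),
        "event": {"original": raw_line, "module": "syslog"},
    }

print(normalize("Feb  3 12:00:01 fw01 sshd[4242]: Failed password for root"))
```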

Storage Module

consists of separate Elasticsearch node roles: master, data, and coordinating, each scaled according to best practices and demand, for example 3 nodes per role. Data availability is ensured with +1 redundancy, meaning the same log event exists on at least 2 different Elasticsearch nodes running on different (virtual) servers.
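In Elasticsearch terms, the “+1 redundancy” corresponds to one replica per primary shard. Below is a minimal sketch, assuming the elasticsearch Python client and hypothetical names, of an index template expressing this.

```python
# Sketch of an index template giving every log event two copies in the cluster.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://logstack.example.internal:9200",
                   ca_certs="internal-ca.pem",
                   basic_auth=("admin", "********"))

es.indices.put_index_template(
    name="logstack-default",                # hypothetical template name
    index_patterns=["logs-*"],
    template={
        "settings": {
            "number_of_shards": 3,          # spread across the data nodes
            "number_of_replicas": 1,        # "+1 redundancy": 2 copies in total
        }
    },
)
```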

AAA module

is integrated with the Storage module and provides the minimum necessary access using role-based access control (RBAC), specific to each LogStack component and/or user that needs to access it. In the default configuration, the accounts of LogStack’s internal modules (about 10 preconfigured roles) are kept in a local database, while human users (3 preconfigured roles) are authenticated against an existing external AAA service provider over a standard secure protocol, such as LDAPS (Microsoft AD, OpenLDAP), OpenID Connect (e.g. Azure AD), Kerberos, or SAML.
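A sketch of how such a role-specific restriction can be expressed through the Elasticsearch security API (the _security/role endpoint); the role name, index pattern, and credentials below are assumptions for illustration.

```python
# Sketch: a read-only role limited to Windows logs, created via the security API.
import requests

ES_URL = "https://logstack.example.internal:9200"   # hypothetical endpoint

role = {
    "cluster": [],                                   # no cluster privileges
    "indices": [
        {
            "names": ["logs-windows-*"],             # assumed index pattern
            "privileges": ["read", "view_index_metadata"],
        }
    ],
}

resp = requests.put(
    f"{ES_URL}/_security/role/windows_admin_readonly",
    json=role,
    auth=("admin", "********"),
    verify="internal-ca.pem",                        # CA from the PKI module
)
resp.raise_for_status()
print(resp.json())
```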

Backup module

can be used in two roles:

  • for backing up data to a separate system and restoring it from there;
  • for archiving older data and restoring it back into the system when needed.

For backup and archiving, a connection to an external storage medium via the S3 or NFS protocol is suitable. Backups and restores can be performed with high granularity, for example filtered by index or date. The backup module works reliably even with a dense snapshot schedule (every 15 minutes) and a large number (1000+) of indexes and snapshots.
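For illustration, the two steps behind this backup/archiving flow map onto the Elasticsearch snapshot REST API: register a repository (an NFS-backed shared filesystem here; an S3 repository works the same way) and take a filtered snapshot. The paths, repository and snapshot names, and credentials are assumptions.

```python
# Sketch: register a snapshot repository and take a date-filtered snapshot.
import requests

ES_URL = "https://logstack.example.internal:9200"
AUTH = ("admin", "********")
CA = "internal-ca.pem"

# 1. Register the repository (the path must be whitelisted in path.repo).
requests.put(
    f"{ES_URL}/_snapshot/nfs_backup",
    json={"type": "fs", "settings": {"location": "/mnt/backup/logstack"}},
    auth=AUTH, verify=CA,
).raise_for_status()

# 2. Snapshot only May 2024 indexes (high granularity: filter by index/date).
requests.put(
    f"{ES_URL}/_snapshot/nfs_backup/logs-2024.05?wait_for_completion=false",
    json={"indices": "logs-*-2024.05.*", "include_global_state": False},
    auth=AUTH, verify=CA,
).raise_for_status()
```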

Installation module

helps to perform the initial installation of LogStack as quickly as possible. It includes functions (ansible playbooks) for automatic installation and configuration of server infrastructure services (GFS, Docker Swarm), as well as the above-mentioned central service components and Acquisition modules.

Alerting modules

allow for automated analysis of log events and generation of user notifications to various channels, such as email, Slack, etc.
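Purely as a sketch of the alerting idea (the actual alerting modules are configured inside LogStack), the following Python example counts matching events over the last five minutes and, above an assumed threshold, posts a notification to a Slack incoming webhook. The query, threshold, and webhook URL are placeholders.

```python
# Illustrative sketch: count recent failures and notify a Slack channel.
import requests

ES_URL = "https://logstack.example.internal:9200"
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"event.outcome": "failure"}},
                {"range": {"@timestamp": {"gte": "now-5m"}}},
            ]
        }
    }
}

count = requests.post(
    f"{ES_URL}/logs-*/_count",
    json=query,
    auth=("alerter", "********"),
    verify="internal-ca.pem",
).json()["count"]

if count > 100:  # assumed threshold
    requests.post(
        SLACK_WEBHOOK,
        json={"text": f":rotating_light: {count} failed logins in the last 5 minutes"},
    )
```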

The PKI module

provides X.509 certificates to all LogStack service-providing modules (e.g. Elasticsearch, Logstash, etc.). It can be used as an independent 2-level CA (rCA + iCA) or integrated into an existing PKI infrastructure as a signing sub-CA.
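To make the 2-level CA idea concrete, here is a compact, illustrative Python sketch using the cryptography package, in which a self-signed rCA signs an iCA, which in turn signs a service certificate. Key sizes, lifetimes, and names are assumptions and not the PKI module’s actual parameters.

```python
# Illustrative 2-level CA: rCA -> iCA -> service certificate.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def make_cert(subject_cn, issuer_cert, issuer_key, public_key, is_ca, days):
    subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, subject_cn)])
    issuer = issuer_cert.subject if issuer_cert else subject  # self-signed root
    now = datetime.datetime.now(datetime.timezone.utc)
    return (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(issuer)
        .public_key(public_key)
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=days))
        .add_extension(x509.BasicConstraints(ca=is_ca, path_length=None), critical=True)
        .sign(issuer_key, hashes.SHA256())
    )

# Root CA (rCA): self-signed.
rca_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
rca_cert = make_cert("LogStack rCA", None, rca_key, rca_key.public_key(), True, 3650)

# Intermediate CA (iCA): signed by the root.
ica_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
ica_cert = make_cert("LogStack iCA", rca_cert, rca_key, ica_key.public_key(), True, 1825)

# Service certificate (e.g. an Elasticsearch node): signed by the intermediate.
node_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
node_cert = make_cert("es-node-01.logstack.internal", ica_cert, ica_key,
                      node_key.public_key(), False, 398)
```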

The integrity assurance module

hashes all log entries (SHA-2) before storage and links them into a blockchain-like hash chain. A background process also checks the integrity of this structure and alerts in case of loss of integrity.
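A minimal sketch of the hash-chain principle described above: each stored entry carries a SHA-256 digest covering both the entry and the previous digest, so any later modification or deletion breaks the chain and can be detected.

```python
# Minimal hash-chain sketch for tamper-evident log storage.
import hashlib

def chain(entries):
    """Return (entry, digest) pairs forming a blockchain-like hash chain."""
    prev = "0" * 64  # genesis value
    out = []
    for entry in entries:
        digest = hashlib.sha256((prev + entry).encode("utf-8")).hexdigest()
        out.append((entry, digest))
        prev = digest
    return out

def verify(chained):
    """Re-compute the chain and report the first broken link, if any."""
    prev = "0" * 64
    for i, (entry, digest) in enumerate(chained):
        expected = hashlib.sha256((prev + entry).encode("utf-8")).hexdigest()
        if digest != expected:
            return i          # integrity lost at position i
        prev = digest
    return None               # chain intact

logs = ["user=alice action=login", "user=bob action=delete file=x"]
stored = chain(logs)
assert verify(stored) is None
stored[0] = ("user=alice action=login method=sudo", stored[0][1])  # tamper
print("broken at:", verify(stored))   # -> broken at: 0
```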

The synchronization module

enables linking two autonomously operating LogStacks (typically in separate data centers or clouds) so that both have all the information available at any given time (near-real-time).

The operations module

contains components and scripts that simplify the LogStack’s daily administration and troubleshooting.

Watch the videos here

Would you like more information about the LogStack log management environment? We have prepared several demo videos with different scenarios, showing how the central log management environment installed with the LogStack service helps to detect anomalies both in web servers (security incident) and in MS Windows systems (hijacking and ransomware incident).


LogStack use cases

Security department,

for whom a centralized log management system with access to all infrastructure and application logs is critical for the fast detection and operational analysis of critical security incidents.

IT development and management departments,

for whom it is important to provide role-specific access to logs through a simple interface. For example, a Windows administrator is only interested in logs related to their Windows domain.

Compliance officer,

who can create reports and extracts based on logs to identify activities that do not comply with company policies, and to prevent data breaches.

Time-saving use case

As a telecommunications company, we need to have a good overview of what is happening in our systems. Although we have implemented a SIEM (security information and event management) system, which works well as a notification system, it is unfortunately not enough to help our departments quickly determine the root causes of problems, threats, and errors. To address this, we planned to implement a centralized log management system that would enable us to effectively reach the root cause of problems. During the implementation phase, we realized that we lacked the knowledge and experience to get the system working properly and manage it. We decided to opt for a service so that we could quickly and efficiently install a centralized log management system that would allow us to isolate problems before they become critical.

Cost-saving use case

We are a medium-sized state institution, and our resources are very limited budget-wise. To avoid constantly struggling with resource scarcity in our production environment, we decided to implement a monitoring system that gives us an overview of resource usage in the production environment. In order to save time and additional costs, we decided to use a partner’s help for implementation and ongoing system operation. Now we can see if an application is consuming excessive underlying infrastructure resources due to regular or irregular operation. With this evidence, we can turn to our partners, who in turn can improve developments according to our requirements.

Compliance use case

Central log management provides our hospital’s IT, information security, and business departments with the views and reporting capabilities we need to demonstrate compliance with internal controls and SLAs. Regardless of whether a security incident occurs, we are increasingly under scrutiny. Existing and new laws impose increasing audit requirements, and summarizing large amounts of information without a central system is almost impossible. In addition to its critical role in cybersecurity, log file analysis helps us manage audit requirements, litigation, and personal data handling. To avoid replacing one task with another, we turned to an experienced partner whose service gives us the opportunity to focus on our own tasks.

Security use case

As an educational institution, we have a large number of users in our system and managing them is very cumbersome. Therefore, it is important for us to centralize as many service management systems as we can. Centralized log management allows us to investigate and audit cases more efficiently, as all event data is collected in one place. Malicious actors have a harder time removing evidence from logs that have already been sent to the log server. In addition, the integrity assurance module verifies the integrity of the log structure and alerts us in case of integrity loss. We can analyze and correlate data across multiple systems. Event data can be accessed even when the original server is offline, compromised, or decommissioned. We outsourced installation and management to a partner to avoid adding another service management task to our list of duties.

Information Security use case

As a transport company, we want to focus on our core business and use multiple partners for other areas, including log management. Although we know exactly where our log files are located, it is not best practice to allow all our partners’ employees access to our production systems beyond system startup. The best solution is centralized log management, which allows users to view log files but does not require access to production systems. With this approach, we keep our systems operational and secure.

Statistics and Marketing use case

As an online store, it is important for us to see the behavior of web application visitors and know which customer trends to follow. This information allows us to easily notice when it is the best time to send a newsletter, release a new version of the web application or launch a product, close our site for maintenance or testing, and much more. In addition, we use log analysis to observe and influence marketing activities. By collecting data such as referring sites, accessible pages, and conversion rates, we can see how well our marketing campaigns are performing and take measures to improve them if necessary.

Our customers

Additional information

Minimum infrastructure requirements

* The minimum configuration at one site consists of 3 servers, each meeting the following requirements: 2 CPU cores, 12 GB RAM, 40 GB OS disk, 400 GB SSD, 1 TB HDD; OS: CentOS 7 or Alma 8.
* More detailed resource requirements will be determined once the task is clarified and depend on log volume, complexity, retention periods, RBAC, etc.

Contact Us

ByteLife Solutions OÜ
Phone: +372 633 3266
E-mail: info@bytelife.com
Toompuiestee 35, Tallinn 10149
Reg. Code: 11179901
VAT no: EE101003320