Protection. Support. Security.

  • Multi-factor authentication
  • Risk-based authentication

Multi-factor authentication, or MFA, is a security system that requires more than one method of identification, drawn from independent categories of credentials, to verify the user's identity for a login or transaction. At least two independent factors are combined, typically something the user knows (a password), something the user has (a security token) and something the user is (biometric verification).

The goal of MFA is to create layered defences, making it more difficult for an unauthorised person to reach a target (such as a physical location, computing device, network or database). If one factor is compromised, the attacker still has at least one more barrier to breach before reaching the target. A few years ago, MFA systems generally relied on two-factor authentication; today, vendors increasingly use the label "multi-factor" for any authentication scheme requiring more than one credential.
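As an illustration of the "something you have" factor, the sketch below derives a time-based one-time password (TOTP, RFC 6238) from a shared secret, as hardware tokens and authenticator apps do. The secret shown is a made-up example, not a real credential.

```python
# Minimal TOTP sketch (RFC 6238): both sides derive the same short-lived
# code from a shared secret, so a stolen password alone is not enough.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: float, step: int = 30, digits: int = 6) -> str:
    """Derive a one-time code from a shared secret and a point in time."""
    counter = int(timestamp // step)                 # 30-second time window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and client compute the same code within the current time window:
shared_secret = b"example-shared-secret"
print(totp(shared_secret, time.time()))
```

The verifier stores the same secret and accepts the code only within the current time window (plus a small tolerance in practice), which is what makes the token an independent second factor.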

Risk-based authentication is a dynamic system that takes into account the profile of the agent requesting access (IP address, time of access, user-agent HTTP header, etc.) to determine the risk associated with the transaction. The risk profile is then used to decide the complexity of the challenge: high-risk profiles face stronger challenges, while a static username/password may suffice for low-risk profiles. A risk-based implementation lets the application ask users for additional credentials only when the risk level warrants it.

Machine authentication is a common component of risk-based implementations. It runs in the background, and the client is prompted for additional authentication only if the machine is not recognised. In a risk-based authentication system, the organisation decides whether additional identity verification is necessary.
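The scoring logic described above can be sketched as follows. The factors, weights and thresholds are invented for illustration; real products tune them from fraud data.

```python
# Illustrative risk-based authentication scoring. All factor weights and
# the challenge thresholds below are made up for the example.
def risk_score(request: dict, known_profile: dict) -> int:
    """Score a login request against the user's known profile."""
    score = 0
    if request["ip"] not in known_profile["known_ips"]:
        score += 40                      # unfamiliar network location
    if request["device_id"] not in known_profile["known_devices"]:
        score += 30                      # machine authentication failed
    if not (8 <= request["hour"] <= 20):
        score += 15                      # access outside usual hours
    if request["user_agent"] != known_profile["usual_user_agent"]:
        score += 15                      # unusual browser fingerprint
    return score

def challenge_for(score: int) -> str:
    """Pick an authentication challenge proportional to the risk."""
    if score < 30:
        return "password"                # low risk: static credentials suffice
    if score < 60:
        return "password+otp"            # medium risk: step up to a one-time password
    return "deny"                        # high risk: refuse or escalate

profile = {"known_ips": {"203.0.113.7"},
           "known_devices": {"laptop-1"},
           "usual_user_agent": "Firefox"}
request = {"ip": "198.51.100.9", "device_id": "laptop-1",
           "hour": 14, "user_agent": "Firefox"}
print(challenge_for(risk_score(request, profile)))   # unfamiliar IP triggers a step-up
```

A request from a known machine and network would pass with just a password, while an entirely unfamiliar profile would be denied outright.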

If the risk is deemed significant, stronger authentication is triggered, such as a one-time password delivered over an out-of-band channel.

Risk-based authentication can also request additional authentication mid-session when customers perform certain high-risk transactions (such as money transfers or address changes).

Risk-based authentication is very advantageous for clients, as additional steps are required only in unusual circumstances (such as an attempt to log in from a new computer).

The key is to improve the accuracy of user authentication without disrupting users; risk-based authentication is widely used by large groups and enterprises.

  • Web application firewall
  • Next Generation Firewall

A web application firewall (WAF) is a special type of application firewall aimed specifically at web applications. It is deployed in front of web applications and inspects bidirectional web (HTTP) traffic, detecting and blocking anything malicious. OWASP provides a broad technical definition of a WAF: "a security solution on the web application level which, from a technical point of view, does not depend on the application itself".

The PCI DSS Requirement 6.6 information supplement defines a WAF as "a security policy enforcement point positioned between a web application and the client endpoint". This functionality can be implemented in software or hardware, running in an appliance device or in a typical server running a common operating system. It may be a stand-alone device or integrated into other network components.

In other words, a WAF can be a virtual or physical appliance that prevents external threats from exploiting vulnerabilities in web applications. These vulnerabilities may exist because the application runs an outdated version or was insufficiently hardened during development. The WAF compensates for such code flaws through a special configuration of rule sets, also called policies.

A WAF is not the ultimate security solution; it is designed to be used together with other network perimeter defences (such as network firewalls and intrusion prevention systems) as part of a holistic defence strategy.

As the SANS Institute points out, a WAF generally follows a positive security model, a negative security model, or a combination of both. WAFs use a mix of rule-based logic, parsing and signatures to detect and prevent attacks such as cross-site scripting and SQL injection. OWASP publishes a list of the top ten web application security flaws, and all commercial WAF offerings cover at least those ten. Non-commercial options are also available.
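A negative security model can be sketched as a set of rule-based signatures matched against request parameters. The two patterns below are deliberately simplistic illustrations; real rule sets such as the OWASP Core Rule Set contain hundreds of far more careful rules.

```python
# Toy negative-security-model check: match request parameter values
# against a few hypothetical attack signatures. Illustration only.
import re
from typing import Optional

SIGNATURES = [
    ("sql-injection",
     re.compile(r"(?i)\b(union\s+select|or\s+1\s*=\s*1|;\s*drop\s+table)")),
    ("cross-site-scripting",
     re.compile(r"(?i)(<script\b|javascript:|onerror\s*=)")),
]

def inspect(param_value: str) -> Optional[str]:
    """Return the name of the first matching attack signature, or None."""
    for name, pattern in SIGNATURES:
        if pattern.search(param_value):
            return name
    return None

print(inspect("id=1 OR 1=1"))                 # flagged as SQL injection
print(inspect("<script>alert(1)</script>"))   # flagged as XSS
print(inspect("plain search terms"))          # passes through
```

A positive security model would do the opposite: define exactly what a valid parameter looks like and reject everything else.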

As mentioned above, the well-known open-source WAF engine ModSecurity is one such option. The WAF engine alone is not enough to provide adequate protection, so OWASP and Trustwave's SpiderLabs organise and maintain the Core Rule Set for use with the ModSecurity WAF engine via GitHub.

Firewalls fall into two categories: network-based and host-based systems. A network-based firewall can be placed anywhere within a local or wide area network, as a software appliance running on general-purpose hardware, a hardware appliance running on dedicated hardware, or a virtual appliance running on a virtual host controlled by a hypervisor. Firewall appliances may also provide non-firewall functions, such as DHCP or VPN services.

According to Gartner's definition, the Next Generation Firewall (NGFW) is "a deep packet inspection firewall that moves beyond port/protocol inspection and blocking to add application-level inspection, intrusion prevention, and bringing intelligence from outside the firewall".

The NGFW belongs to the third generation of firewall technology, combining a traditional firewall with other network-device filtering functions, such as an application firewall using deep packet inspection (DPI) and an intrusion prevention system (IPS).

Other techniques may also be employed, such as TLS/SSL encrypted traffic inspection, website filtering, QoS and bandwidth management, antivirus inspection, and integration with third-party identity management (e.g. LDAP, RADIUS, Active Directory).

An NGFW must be able to identify users and groups and enforce identity-based security policies. Wherever possible, this should be achieved through direct integration with existing enterprise authentication systems (such as Active Directory), without custom server-side software, allowing administrators to create more granular policies.
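The idea of identity-based policy can be sketched as a lookup from user to directory groups followed by a first-match rule table. The users, groups, applications and rules below are invented; a real NGFW would resolve group membership from the enterprise directory.

```python
# Sketch of identity-based NGFW policy evaluation. All names and rules
# are hypothetical; group resolution would normally query the directory.
USER_GROUPS = {
    "alice": {"engineering"},
    "bob": {"finance"},
}

# (group, application) -> action; first match wins.
POLICY = [
    ("engineering", "ssh", "allow"),
    ("finance", "erp-app", "allow"),
    ("finance", "ssh", "deny"),
]

def evaluate(user: str, application: str) -> str:
    """Return the firewall action for this user/application pair."""
    groups = USER_GROUPS.get(user, set())
    for group, app, action in POLICY:
        if group in groups and app == application:
            return action
    return "deny"                        # implicit default-deny rule

print(evaluate("alice", "ssh"))   # engineering may use SSH
print(evaluate("bob", "ssh"))     # finance may not
```

The default-deny fallthrough mirrors the usual firewall convention: traffic not explicitly permitted is dropped.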

  • Anti-Ransomware Solutions
  • Anti-Exploit Solutions

Ransomware, with well-known families such as CryptoLocker, CryptoDefense and CryptoWall, is a type of malware that restricts or even prevents users from fully using their computers, typically by locking the computer screen or encrypting files.

The newer crypto-ransomware variants demand that users pay a ransom to obtain the decryption key.

Today's ransomware families trace their origins to early fake antivirus software, followed by locker variants, and then by the file-encrypting variants that now make up the majority of ransomware.

Every type shares a common goal: extorting money from victims through social engineering and intimidation, with the ransoms demanded growing ever higher.

To provide a higher level of security, anti-exploit programs block the very techniques attackers rely on.

These solutions can protect you from attacks on Flash and browser flaws, and even block exploits that are still undiscovered or unpatched.

The exploit "kill chain" consists of several stages. Web exploits often rely on drive-by download attacks: the infection begins when the victim visits a compromised website injected with malicious JavaScript code.

After several checks, the victim is ultimately redirected to a landing page targeting vulnerabilities in Flash, Silverlight, Java or the web browser itself. For vulnerabilities in Microsoft Office or Adobe Reader, however, the initial infection vector is typically a phishing e-mail or a malicious attachment.

After this initial delivery stage, the attacker uses one or more software vulnerabilities to take control of the process execution flow, entering the exploitation phase. Because of the security mitigations built into the operating system, it is usually impossible to execute arbitrary code directly, so the attacker must first bypass them.

Successful exploitation enables shellcode execution, in which the attacker's arbitrary code begins to run, ultimately leading to execution of the payload. The payload may be downloaded as a file or even loaded and executed directly from system memory.

Regardless of how the initial steps are carried out, the attacker's ultimate goal is to launch the malicious activity. Starting another application or thread can be highly suspicious, particularly when the application in question is known to lack that functionality. Anti-exploit technology monitors these operations, suspends the application's execution flow, and performs further analysis to verify whether the attempted action is legitimate.
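One such behavioural check can be sketched as a baseline of which child processes each application is expected to start; anything outside the baseline is flagged for further analysis. The process names and baseline below are invented for illustration.

```python
# Toy behavioural check: flag a process start whose parent application
# is not expected to spawn that child. Baseline is hypothetical.
EXPECTED_CHILDREN = {
    "winword.exe": set(),                    # a word processor should spawn nothing
    "chrome.exe": {"chrome.exe"},            # a browser spawns its own renderers
    "explorer.exe": {"chrome.exe", "winword.exe", "cmd.exe"},
}

def is_suspicious(parent: str, child: str) -> bool:
    """True if this parent is not known to legitimately start this child."""
    allowed = EXPECTED_CHILDREN.get(parent)
    if allowed is None:
        return False                         # unknown parent: no baseline to judge
    return child not in allowed

print(is_suspicious("winword.exe", "cmd.exe"))   # True: likely exploit payload
print(is_suspicious("explorer.exe", "cmd.exe"))  # False: user opened a shell
```

A real anti-exploit component would combine a check like this with the memory and code-origin analysis described below before blocking the action.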

Program activity that occurred just before the suspicious code was launched (modifications in specific memory areas and the origin of the code-launch attempt) is analysed to determine whether the action was initiated by the user or by an exploit.

In addition, exploit prevention (EP) components implement numerous security measures that cover most of the attack techniques used by exploits, including DLL hijacking, reflective DLL injection, heap spraying and stack pivoting, among others.

These and other behavioural indicators, provided by the execution-tracking mechanism of the behaviour detection component, allow the technology to reliably block execution of the payload.

  • Network Performance Monitoring & Diagnostics
  • Penetration testing

Network performance monitoring and diagnostic tools help IT and network operations teams understand the ongoing behaviour of the network and its components in response to traffic demands and network usage. Measuring and reporting on network performance is essential to ensure that it remains at an acceptable level. Customers in this market are looking for tools to detect application problems, identify root causes and plan capacity.

The use of network monitoring software and network monitoring tools can simplify and automate the network monitoring and management process.

A network monitoring system is essential to resolve bottlenecks and other issues before they degrade network performance.

With the rapid development of enterprise and remote network monitoring, a wide range of monitoring equipment and solutions is available on the market. An effective network management system includes an integrated network monitoring tool that helps administrators reduce staff workload and automate basic troubleshooting.

Functions of effective network monitoring software:

- Visualize the entire IT infrastructure, with further classification by type or logical group.

- Automatically configure devices and interfaces using predefined templates.

- Monitor and troubleshoot network, server and application performance.

- Apply advanced network performance monitoring techniques to resolve faults quickly by identifying the root cause of the problem.

- Take advantage of advanced reporting capabilities, with reports scheduled and sent or published automatically by e-mail.
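As a sketch of how such a tool can separate a genuine fault from normal jitter, the function below compares each latency sample with a rolling baseline and flags outliers. The window size and threshold factor are arbitrary illustrative choices.

```python
# Toy anomaly detector for latency samples: flag values far above a
# rolling baseline. Window and factor are illustrative, not tuned.
from statistics import mean, stdev

def find_anomalies(samples_ms, window: int = 5, factor: float = 3.0):
    """Return indexes of samples far above the rolling baseline."""
    anomalies = []
    for i in range(window, len(samples_ms)):
        baseline = samples_ms[i - window:i]
        threshold = mean(baseline) + factor * stdev(baseline)
        if samples_ms[i] > threshold:
            anomalies.append(i)
    return anomalies

latency = [12.1, 11.8, 12.3, 12.0, 11.9, 12.2, 95.0, 12.1]
print(find_anomalies(latency))   # the 95 ms spike stands out
```

A monitoring platform would feed such flags into its alerting and root-cause workflow rather than simply printing them.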

Network monitoring has become an important aspect of managing any IT infrastructure. Similarly, a network assessment is considered a basic step in aligning your IT infrastructure with business objectives, and it is carried out using network monitoring applications.

Penetration testing (also known as a pen test, pentest or ethical hacking) is an authorised simulated attack on a computer system, performed to evaluate its security. It should not be confused with a vulnerability assessment.

The test is performed to identify weaknesses (also referred to as vulnerabilities), including the potential for unauthorised parties to gain access to the system's features and data, so that the system's risks can be fully assessed.

The process typically identifies the target systems and a particular goal, then reviews the available information and undertakes various means to attain that goal. A penetration test target may be a white box (for which background and system information are provided) or a black box (for which only basic information, or none beyond the company name, is provided).

A grey box penetration test is a combination of the two (the tester is given limited knowledge of the target). Penetration tests can help determine whether a system is vulnerable to attack, whether its defences are sufficient, and which defences (if any) the test defeated.
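As an illustration of the reconnaissance step that usually opens a test, the sketch below performs a plain TCP connect scan. Run it only against systems you are explicitly authorised to test; the target address and port list are placeholders.

```python
# Minimal TCP connect scan: report which of the given ports accept
# connections. For authorised testing only; target values are placeholders.
import socket

def open_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of ports accepting TCP connections on host."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                found.append(port)
    return found

# Example, against a host you control:
# print(open_ports("192.0.2.10", [22, 80, 443, 8080]))
```

Real tools such as Nmap add SYN scanning, service fingerprinting and timing controls on top of this basic idea.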

Any security issues uncovered by the penetration test must be reported to the system owner. The penetration test report may also assess the potential impact on the organisation and propose recommendations for mitigating the risks.

  • Endpoint detection and response
  • ISP Link Balancer
  • The Application Delivery Controller

The market for Endpoint Detection and Response (EDR) solutions is defined as solutions that record and store endpoint behaviour at the system level, use various data analysis techniques to detect suspicious system behaviour, provide contextual information, block malicious activity, and offer recommendations for restoring affected systems.

EDR solutions must provide the below essential features:

• Responding to threats in real-time
• Increasing visibility and transparency of user data
• Detecting stored endpoint events and malware injections
• Creating blacklists and whitelists
• Integration with other technologies
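The blacklist/whitelist feature above can be sketched as a hash lookup: the agent hashes a file and checks the digest against known-bad and known-good lists. The file contents and lists below are invented, not real threat intelligence.

```python
# Toy EDR hash lookup: classify a file by its SHA-256 digest.
# Hashes and verdicts are illustrative only.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verdict(file_bytes: bytes, blacklist: set, whitelist: set) -> str:
    digest = sha256_of(file_bytes)
    if digest in blacklist:
        return "block"                   # known malware: prevent execution
    if digest in whitelist:
        return "allow"                   # explicitly trusted binary
    return "analyze"                     # unknown: send for further analysis

bad = sha256_of(b"malicious payload")
good = sha256_of(b"signed corporate tool")
print(verdict(b"malicious payload", {bad}, {good}))   # block
print(verdict(b"something new", {bad}, {good}))       # analyze
```

Production EDR agents combine such lookups with the behavioural recording and analysis described above, since hash lists alone miss novel samples.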

Internet load balancing refers to the method of distributing Internet traffic over two or more Internet connections to achieve two results:

  1. Better Internet and end-user performance, and
  2. Improved reliability of the Internet connection.


A pleasant side effect of Internet load balancing is often a reduction in the operating cost of Internet service. A load-balanced connection is usually more cost-effective than a single ISP link of similar bandwidth capacity; indeed, a single provider often cannot deliver such a link at all, making load balancing the only practical way to achieve the required speed and reliability.

Although the context and implementation differ, the principle of load balancing remains the same in any environment. Internet service providers use balancing strategies to manage fluctuating incoming Internet traffic, and cloud load balancing has its own particular aspects.

The problem of load balancing multiple ISP connections can be solved very simply by using the GUI options in many commercially available devices.

The reasons for ISP load balancing vary: one ISP may be considered more reliable, or simply cheaper, than another.
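A minimal sketch of distributing new flows across two uplinks with weighted round-robin follows. The link names and weights are invented, and real devices also monitor link health before assigning traffic.

```python
# Weighted round-robin link selection across two hypothetical ISP uplinks.
from itertools import cycle

def weighted_schedule(links: dict):
    """Yield link names in proportion to their configured weights."""
    expanded = [name for name, weight in links.items() for _ in range(weight)]
    return cycle(expanded)

links = {"isp-a": 3, "isp-b": 1}        # isp-a has three times the capacity
scheduler = weighted_schedule(links)
flows = [next(scheduler) for _ in range(8)]
print(flows)   # isp-a carries three flows for every one on isp-b
```

Weighting by capacity lets a cheap secondary link absorb overflow without starving the primary, which is exactly the cost argument made above.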

The Application Delivery Controller (ADC) is a computer network device in the datacenter, usually part of an Application Delivery Network (ADN), that performs common tasks otherwise handled by the web servers themselves, in order to remove load from them. Many also provide load balancing. The ADC is usually placed in the DMZ, between the external firewall or router and the web farm.

A common misconception is that the Application Delivery Controller (ADC) is merely an advanced load balancer. That description is not accurate: an ADC is a network device that helps applications direct user traffic so as to remove excessive load from two or more servers, and it provides many services across OSI layers 3 to 7, of which load balancing is only one.

Other features common to most ADCs include IP traffic optimisation, traffic shaping and steering, SSL offload, web application firewalling, CGNAT, DNS and proxying/reverse proxying, to name a few. They also tend to offer more advanced capabilities such as content redirection and server health monitoring.

Cisco Systems offered application delivery controllers until it withdrew from that market in 2012. Market leaders such as F5 Networks, Citrix, KEMP and Radware had expanded their market share in the preceding years, partly at Cisco's expense.

Privileged Access Management (PAM)

ETHIC IT offers you an innovative and essential solution, developed by our partner SSH.

Principles of PAM solution

A PAM (Privileged Access Management) solution allows you to manage your privileged accounts in a transparent and secure way.

The goals of the solution are:
- To provide a single secure access point: no need to use intermediate jump hosts
- To manage access: a user, administrator or external profile can only see and access what it needs
- To ensure traceability: actions and connections are logged, and sessions can be video-recorded and archived
- To warn of certain actions: detect and block specific actions according to defined rules (for example, preventing a reboot command)
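The action-blocking rule above (for example, preventing a reboot) can be sketched as a per-role deny-list checked before a command is forwarded to the target machine. The roles and command lists are invented for illustration.

```python
# Toy bastion command filter: per-role deny-lists checked before a
# command is forwarded to the target. Roles and rules are hypothetical.
DENIED_COMMANDS = {
    "external-profile": {"reboot", "shutdown", "rm"},
    "administrator": set(),              # admins are unrestricted in this example
}

def allowed(role: str, command_line: str) -> bool:
    """True if the role may run this command on the target machine."""
    command = command_line.split()[0]    # first token is the binary name
    return command not in DENIED_COMMANDS.get(role, set())

print(allowed("external-profile", "reboot now"))   # False: blocked by rule
print(allowed("administrator", "reboot now"))      # True
```

In a real bastion the same interception point is also where the session recording and logging hooks live.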

Simple and transparent operation

The user interface is simple and aggregates all your environments in a single web window.
The Bastion solution allows you to assign roles to users with specific access rights for each role for all your environments. Thus, on a single interface, you can manage the access of your users to all your machines and trace all operations.
The user only has visibility into the environments that are accessible to them, and the administrator can manage access and review records from the same interface.

Ethic IT supports you in implementing the PrivX by SSH bastion solution.