Application security (AppSec) is a critical discipline focused on safeguarding software applications from threats and vulnerabilities throughout their lifecycle. This manifesto outlines fundamental principles and actionable guidelines to build and maintain secure applications, moving beyond reactive measures to proactive, design-centric security. It emphasizes a holistic approach, from minimizing the attack surface to robust data handling, secure configurations, and resilient operational practices.
Alex Tatulchenkov - AppSec Leader | Enterprise Web Defender.
No code = no issues. No sinks = no vulnerabilities. No user-controlled input = no vector of attack.
- Aggressive Code Deletion: Always delete obsolete, dead, unreachable, and unreferenced code. This includes features that are no longer actively used, experimental branches that never reached production, and legacy integrations. Static analysis tools like PHPStan, Phan, and Psalm can effectively detect unreachable, unreferenced, and dead code. However, obsolete code, which might still be executed but is no longer relevant, often requires dynamic analysis combined with concepts like tombstones.
- Input Minimization: Design application features to solicit the absolute minimum user input necessary for their function. Where data can be reliably generated by the system (e.g., generating a unique, secure filename for an uploaded document rather than accepting a user-provided one), prioritize system generation over user provision. This significantly limits potential manipulation vectors.
- Least Expressive Language: When interacting with external systems or accepting user input, use the least expressive language or data format possible. Prefer strictly defined, structured data formats (e.g., JSON Schema, Protocol Buffers) with clear validation rules over free-form text. This constrains the potential for malicious payloads to be syntactically valid within the expected input structure.
- Application Hardening: Actively engage in application hardening practices. This includes systematically removing unnecessary components or functions, eliminating default passwords, and rigorously auditing software integrations. Access to application functionality should be strictly restricted based on user roles and context.
The concept of "Absolute Zero" is built on several foundational tenets:
- Unused Code Adds Complexity: Every line of code, whether actively executed or not, contributes to the overall complexity of the codebase. This increased complexity makes the system harder to understand, maintain, and, crucially, to audit for security vulnerabilities. Such code can harbor dormant vulnerabilities that might be inadvertently activated by future changes, unexpected execution paths, or environmental shifts.
- Unused Code is Misleading: The presence of unused code can mislead developers during maintenance or refactoring. They might misinterpret its original purpose or assume it is still relevant, leading to incorrect assumptions that could introduce new bugs or security flaws.
- Dead Code Can Come Alive: As tragically demonstrated by the Knight Capital Group incident in 2012, dormant "dead" code can be inadvertently reactivated during software deployments or system changes. This can lead to severe, unpredictable, and financially catastrophic consequences, underscoring the critical risk of not actively identifying and removing obsolete or dead code.
- Reduced Attack Surface: Fundamentally, a smaller, leaner codebase means fewer potential entry points for attackers, fewer opportunities for logical flaws, and a significantly reduced surface area for exploitation. Hardening practices, by removing superfluous elements, directly contribute to this reduction in "threat profile" and "attack surface."
The economic imperative of code deletion extends beyond mere technical debt. The Knight Capital Group incident provides a stark example of a direct, massive financial loss ($400 million) directly attributable to "dead code coming alive." This transcends the traditional understanding of technical debt, elevating code hygiene to a clear economic risk. If unused code consumes CPU cycles, it incurs tangible operational costs (compute resources, energy). More significantly, if it leads to incidents, it incurs substantial incident response costs, severe reputational damage, and direct financial losses. Therefore, the "Absolute Zero" principle is not solely about security or code cleanliness; it is a critical operational and financial efficiency measure. Organizations should integrate code hygiene metrics and the active management of code obsolescence into their broader operational efficiency and risk management frameworks, rather than confining it solely to security audits or developer best practices. This reframes "code deletion" from a developer-centric task to a strategic business imperative, directly impacting the bottom line and overall organizational resilience.
A significant complication in the pursuit of "Absolute Zero" is the theoretical limitation posed by the Halting Problem. The Halting Problem, which posits that no general algorithm can definitively determine if an arbitrary program will halt for all possible inputs, is reducible to the problem of finding dead code. This fundamental undecidability implies that a perfect, universal dead code detector capable of identifying all instances of dead or obsolete code in any program is theoretically impossible. While static analysis tools can effectively identify some forms of dead code (e.g., code paths that are syntactically unreachable due to explicit return statements, or unreferenced variables), they cannot definitively prove the absence of all dead or obsolete code, especially in complex, dynamic systems, those with highly conditional logic, or those where code is dynamically loaded or generated. This inherent limitation necessitates a multi-faceted approach, combining static analysis with dynamic analysis, rigorous code reviews, and continuous architectural vigilance. The challenge of identifying "dead code" is evolving, particularly in modern microservices and distributed systems, where determining the "liveness" of code becomes exponentially more complex. This suggests a combination of deep architectural understanding, distributed tracing, and long-term usage analytics is required, meaning organizations must accept a certain level of residual "dead code" risk, making continuous monitoring and robust incident response even more crucial.
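The tombstone technique mentioned above complements static analysis where "liveness" cannot be proven. The following is a minimal sketch, not a prescribed implementation; the `tombstone()` helper and its labels are hypothetical, and a real system would report hits to a central log or metrics backend.

```php
<?php
declare(strict_types=1);

// Hypothetical tombstone marker: drop it into code paths suspected to be
// obsolete. If the path is ever executed, the hit is recorded; paths whose
// tombstones never fire over a long observation window become deletion
// candidates.
function tombstone(string $label): void
{
    error_log(sprintf(
        '[TOMBSTONE] %s hit at %s',
        $label,
        (new DateTimeImmutable())->format(DATE_ATOM)
    ));
}

function legacyDiscount(float $price): float
{
    tombstone('legacyDiscount-2024-06'); // believed obsolete; verify before deleting
    return $price * 0.9;
}
```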
Security is paramount, and a cautious approach is fundamental to safety. All data must undergo context-specific escaping precisely at the boundary where it interacts with a "sink." A "sink" is defined as any program point that writes to an external resource, such as a database, file system, or user interface.
- Escape data as close to the sink as possible.
- Escape any data, no matter whether it is user-provided or system-generated.
- Extract the raw value of a ValueObject directly before escaping.
Proper output encoding and escaping is a cornerstone for preventing common injection vulnerabilities, such as Cross-Site Scripting (XSS), SQL Injection, and Command Injection. It ensures that data is always treated as data, not as executable code, in its target context. Treating all data as potentially untrusted at the point of output ensures you don't inadvertently introduce flaws, even with internal data.
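A minimal sketch of context-specific escaping at the sink; the `Username` value object and the two sinks shown here (HTML output and a parameterized SQL query) are illustrative assumptions, not a prescribed API.

```php
<?php
declare(strict_types=1);

final class Username
{
    public function __construct(private readonly string $value) {}

    public function raw(): string
    {
        return $this->value;
    }
}

// Sink: HTML output. The raw value is extracted and escaped immediately
// before it reaches the sink, whether it came from a user or the system.
function renderGreeting(Username $name): string
{
    return '<p>Hello, '
        . htmlspecialchars($name->raw(), ENT_QUOTES | ENT_SUBSTITUTE, 'UTF-8')
        . '</p>';
}

// Sink: SQL. Parameterization happens at the database boundary itself.
function findUser(PDO $db, Username $name): ?array
{
    $stmt = $db->prepare('SELECT id, name FROM users WHERE name = :name');
    $stmt->execute(['name' => $name->raw()]);

    return $stmt->fetch(PDO::FETCH_ASSOC) ?: null;
}
```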
Access to computational power is a privilege, not a right. Applications should parse, not validate, input data as close to the source as possible, failing quickly if the data does not conform to expectations.
- Parse, not validate, input data as close to the source as possible.
- Use always-valid ValueObjects to enforce invariants.
- Do not transfer malicious data from source to sink.
Traditional validation often leads to "shotgun parsing," an antipattern where input-validating code is scattered throughout the processing logic, rather than being concentrated at the input boundary. This dispersal makes it difficult to reject invalid input before it is processed, potentially leading to an unpredictable or corrupted program state if errors are discovered late. Furthermore, validation checks that return no information are fragile, easily forgotten, and can lead to runtime errors if invariants are broken elsewhere. Many injection vulnerabilities are direct consequences of shotgun parsing, where user input is not correctly validated before being used by later application code. Parsing, in contrast, stratifies the program into distinct parsing and execution phases. Invalid inputs are rejected at the boundary, preventing the program from entering a compromised state. This approach leverages compile-time guarantees by embedding validation knowledge into the type system, ensuring that subsequent functions operate on already-validated and structured data. This significantly reduces the attack surface by making it harder for malicious input to exploit unexpected states and promotes a single source of truth for data integrity.
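A minimal "parse, don't validate" sketch; the `EmailAddress` class and its method names are illustrative assumptions.

```php
<?php
declare(strict_types=1);

final class EmailAddress
{
    // Private constructor: the only way to obtain an instance is via parse(),
    // so every EmailAddress in the system is valid by construction.
    private function __construct(private readonly string $value) {}

    public static function parse(string $input): self
    {
        $normalized = trim($input);

        if (filter_var($normalized, FILTER_VALIDATE_EMAIL) === false) {
            // Fail fast at the source; invalid data never enters program state.
            throw new InvalidArgumentException('Not a valid e-mail address.');
        }

        return new self($normalized);
    }

    public function raw(): string
    {
        return $this->value;
    }
}

// Parsing happens once, at the boundary; everything downstream receives a
// type, not an unverified string.
$email = EmailAddress::parse($_POST['email'] ?? '');
```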
It is imperative not to lose or forget information regarding the validity and nature of a certain input as it flows through the application.
- Declare custom types (ValueObjects) instead of using strings for unstructured data, explicitly encoding their inherent properties and constraints.
- Pass instances of custom types (ValueObjects) throughout the application, from the source to the sink.
- Use raw values of a ValueObject only within a single, well-defined context, typically immediately before a sink or when interacting with an external system requiring a primitive type.
This approach is crucial to prevent the "shotgun parsing" problem, where validation is repeated or blindly assumed to have been done elsewhere. By embedding validity into the type system, the application's logic can operate on trusted data, significantly reducing the risk of vulnerabilities arising from inconsistent or forgotten validation. This ensures that the guarantees established during parsing are carried forward, preventing redundant validation checks throughout the codebase and explicitly maintaining data integrity.
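To illustrate how validity travels with the type, a brief sketch reusing the `EmailAddress` example above; the function name and table are assumptions.

```php
<?php
declare(strict_types=1);

// The signature documents and enforces the guarantee: this function can only
// be called with an already-parsed, always-valid EmailAddress, so no
// re-validation is needed here.
function registerSubscriber(PDO $db, EmailAddress $email): void
{
    $stmt = $db->prepare('INSERT INTO subscribers (email) VALUES (:email)');

    // The raw primitive is extracted only at the sink boundary.
    $stmt->execute(['email' => $email->raw()]);
}
```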
All sources are born at the same architectural level and should be treated equally. Data originating from different sources must undergo identical processing if it represents the same logical data type.
- Apply identical parsing/validation/escaping/sanitization to the same data coming from different sources (e.g., web form, REST API, message queue, internal file).
This prevents vulnerabilities like HTTP Parameter Pollution or HTTP Request Smuggling, where attackers exploit inconsistencies in how different tiers or components of an application parse and interpret the same input. Ensuring uniform handling across all sources eliminates potential bypasses and ensures a consistent security posture.
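A brief sketch of equal treatment of sources, reusing the `EmailAddress` parser from above: the same parsing routine is applied whether the value arrives via a web form or a JSON API payload (the request-handling details are illustrative assumptions).

```php
<?php
declare(strict_types=1);

// Web form source.
$fromForm = EmailAddress::parse($_POST['email'] ?? '');

// REST API source: the JSON body goes through exactly the same parser,
// so both sources yield the identical, always-valid type.
$payload = json_decode(file_get_contents('php://input'), true, 512, JSON_THROW_ON_ERROR);
$fromApi = EmailAddress::parse($payload['email'] ?? '');
```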
Grant only the minimum necessary permissions for any entity (human user, software process, API, etc.) to perform its function.
- User Accounts: Limit user privileges to only what's required for their role.
- Service Accounts: Restrict service accounts to the minimum permissions needed to run the application or service.
- File Permissions: Set file and directory permissions to the most restrictive possible, read-only by default.
- Database Access: Grant specific, fine-grained permissions for database operations (e.g., SELECT only where INSERT isn't needed).
- Network Access: Restrict network connectivity between components to only necessary ports and protocols.
- Implement Role-Based Access Control (RBAC) to assign granular permissions based on user roles and responsibilities.
- Ensure application processes and services run with the least privilege necessary, avoiding execution as root or administrator unless absolutely critical.
- Design software with secure defaults, where new users or processes start with minimal privileges and only gain more as explicitly required and granted.
- Continuously review and audit code and systems to ensure privileges are correctly assigned and no unnecessary privileges are granted.
The Principle of Least Privilege (PoLP) is a fundamental information security concept that dictates a user or entity should only have access to the specific data, resources, and applications needed to complete a required task. This principle applies not only to human users but also to non-human identities such as software processes, automated scripts, and API integrations.
Implementing PoLP offers significant benefits:
- Minimizes Attack Surface: By restricting user and process permissions to only what is necessary, PoLP diminishes the avenues a malicious actor can use to access sensitive data or carry out an attack. It protects superuser and administrator privileges, reducing the overall threat profile.
- Reduces Malware Propagation: Limiting privileges prevents malware from spreading laterally across a network by confining it to its entry point. If an attacker compromises a low-privilege account, their ability to escalate privileges or move to other systems is severely curtailed.
- Enhances Operational Performance: Limiting the scope of potential incidents reduces system downtime caused by breaches, malware spread, or incompatibility issues between applications.
- Safeguards Against Human Error: PoLP helps mitigate the impact of accidental misconfigurations, mistakes, or negligence by limiting the actions an individual can perform.
PoLP is also a fundamental pillar of Zero Trust Network Access (ZTNA) 2.0, providing fine-grained access control that dynamically identifies users, devices, applications, and their functions, regardless of network constructs.
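The RBAC guideline above can be sketched minimally as follows; the role names and permission map are illustrative assumptions, not a prescribed model. The key property is deny-by-default: anything not explicitly granted is refused.

```php
<?php
declare(strict_types=1);

final class AccessControl
{
    // Each role maps to the minimal set of permissions it needs, nothing more.
    private const ROLE_PERMISSIONS = [
        'viewer' => ['report.read'],
        'editor' => ['report.read', 'report.write'],
        'admin'  => ['report.read', 'report.write', 'user.manage'],
    ];

    public function isAllowed(string $role, string $permission): bool
    {
        return in_array($permission, self::ROLE_PERMISSIONS[$role] ?? [], true);
    }
}

$acl = new AccessControl();
$currentUserRole = 'viewer'; // e.g. resolved from the authenticated session

// Deny by default: an unknown role or missing permission is always refused.
if (!$acl->isAllowed($currentUserRole, 'user.manage')) {
    http_response_code(403);
    exit('Forbidden');
}
```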
Employ multiple, independent security controls, so that the failure of one does not compromise the entire system.
- Layer security controls: network firewalls, application-level firewalls (WAFs), input validation, output encoding, strong authentication, granular authorization, data encryption, and regular vulnerability scanning.
- Assume that any single control might fail, and have backups.
- Implement robust Identity and Access Management (IAM).
- Ensure data is encrypted both at rest and in transit, with secure key management.
No single security measure is foolproof. By stacking independent defenses, you significantly increase the effort an attacker needs to succeed, providing more opportunities to detect and stop them. This strategy involves creating a series of barriers that an attacker must overcome, thereby increasing the effort, time, and resources required to breach the system.
A well-architected security model includes firewalls, endpoint protection, network segmentation, and robust logging and monitoring tools. For instance, if an attacker bypasses an application's authentication system, a second layer, such as anomaly detection on login behavior, can still flag and mitigate the threat.
Anticipate and understand potential threats and vulnerabilities during the design phase, not as an afterthought.
- Conduct threat modeling exercises (e.g., STRIDE, DREAD) early and continuously throughout the software development lifecycle.
- Identify data flows, trust boundaries, and potential attack surfaces.
- Document identified threats and corresponding mitigations.
- Use threat modeling to inform security requirements and design decisions.
Building security in from the ground up is exponentially more effective and cost-efficient than trying to patch it on later. Proactive identification of threats allows for the implementation of robust, fundamental safeguards. System hardening refers to the tools, methods, and best practices used to reduce the attack surface in technology infrastructure, including software, data systems, and hardware; it is a crucial component of any cybersecurity strategy, systematically strengthening defenses against a wide range of security vulnerabilities.
Keep a watchful eye and a detailed log of all significant security events.
- Implement comprehensive logging of security-relevant events: authentication attempts, authorization failures, data access, configuration changes, critical errors, and suspicious activity.
- Ensure logs are immutable, protected, and stored securely.
- Centralize logs for easier analysis.
- Implement real-time monitoring and alerting for anomalous or malicious activity using solutions such as Sentry, Datadog, or New Relic.
- Do not log sensitive data (passwords, PII) unless absolutely necessary, and only with strong justification and controls.
- Restrict access to logs to only authorized individuals.
- Log input validation failures.
- Utilize a central routine for all logging operations.
Effective logging and monitoring are crucial for detecting security incidents, understanding their scope, conducting forensic analysis, and fulfilling compliance requirements. You can't fix what you don't know is broken or under attack. Misconfigured error handling can create severe vulnerabilities, so a secure system should fail securely, avoiding messages that disclose internal details. Integrating with Security Operations Centers (SOCs) elevates organizational security by enabling real-time monitoring and swift reaction to potential threats. Correlating application log data with other security information provides a holistic view of the threat landscape, enabling more effective threat detection and analysis. Striking a balance between security and functionality is crucial.
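A minimal sketch of the "central routine" guideline above, implemented here without external dependencies; the class name, file path, and event types are assumptions (a real deployment would typically hand events to a PSR-3 logger or a SIEM).

```php
<?php
declare(strict_types=1);

final class SecurityLog
{
    public function __construct(private readonly string $logFile) {}

    // One central routine: every security-relevant event flows through here,
    // so redaction, formatting, and destination are controlled in one place.
    public function event(string $type, array $context = []): void
    {
        // Never log secrets; strip known sensitive keys defensively.
        unset($context['password'], $context['token'], $context['secret']);

        $line = json_encode([
            'ts'      => (new DateTimeImmutable())->format(DATE_ATOM),
            'type'    => $type,
            'context' => $context,
        ], JSON_THROW_ON_ERROR);

        file_put_contents($this->logFile, $line . PHP_EOL, FILE_APPEND | LOCK_EX);
    }
}

$log = new SecurityLog('/var/log/app/security.log');
$log->event('auth.failure', ['username' => 'jdoe', 'ip' => '203.0.113.7']);
$log->event('input.validation_failure', ['field' => 'email']);
```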
When secrecy or integrity is needed, use cryptography correctly and responsibly.
- Do not roll your own crypto. Use well-vetted, industry-standard cryptographic libraries and algorithms.
- Choose appropriate algorithms for the task (e.g., strong hashing for passwords, authenticated encryption for data at rest and in transit).
- Use sufficiently long and random keys/IVs.
- Protect private keys and secrets rigorously.
- Understand and implement proper key management practices (generation, storage, rotation, revocation).
- Always use HTTPS/TLS for all network communication where confidentiality or integrity is required.
Misusing cryptography can lead to a false sense of security, often making systems less secure than using no crypto at all. Rely on established experts and practices. Secure configurations, like using HTTPS, are fundamental to protecting data in transit. This rule highlights the critical importance of cryptographic best practices to ensure data confidentiality, integrity, and authenticity.
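A hedged sketch of relying on built-in, well-vetted primitives rather than home-grown crypto: PHP's native password hashing and libsodium's authenticated encryption. Key handling is simplified for illustration; in practice the key would come from a secrets manager, not be generated inline.

```php
<?php
declare(strict_types=1);

// Password storage: password_hash() picks a strong algorithm and a unique
// salt automatically; never store or compare raw passwords yourself.
$hash = password_hash('correct horse battery staple', PASSWORD_DEFAULT);
$ok   = password_verify('correct horse battery staple', $hash);

// Data at rest: authenticated encryption with libsodium (ext-sodium,
// bundled since PHP 7.2).
$key        = sodium_crypto_secretbox_keygen();
$nonce      = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
$ciphertext = sodium_crypto_secretbox('sensitive payload', $nonce, $key);

// Decryption returns false if the ciphertext was tampered with.
$plaintext = sodium_crypto_secretbox_open($ciphertext, $nonce, $key);
```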
Manage user sessions meticulously to prevent hijacking and unauthorized access.
- Use Secure Session IDs: Session identifiers must be long, unique, and unpredictable, generated using a cryptographically secure random algorithm with sufficient entropy (e.g., 128-bit or 256-bit randomness).
- Enforce Session Expiration: Sessions should expire after a certain amount of time (idle and absolute timeouts), even if the user remains active. This limits the window of opportunity for an attacker to hijack an active session.
- Implement Account Lockout Mechanisms: After a certain number of failed login attempts, lock the account to prevent brute-force attacks.
- Utilize Rate Limiting: Implement rate limiting on login attempts and other sensitive operations to prevent automated attacks and reduce the risk of brute-force attempts.
- Enforce Strong Password Policies: Require complex passwords combining letters, numbers, and special characters, and encourage multi-factor authentication (MFA) for added security.
- Always Use HTTPS/TLS: Transmit all login information and sensitive data over HTTPS, not HTTP, to prevent man-in-the-middle attacks and ensure confidentiality and integrity. Failed TLS connections should never fall back to insecure connections.
- Password Hashing and Salting: Store user passwords as hashed values with a unique salt for each password to make it difficult for attackers to use rainbow tables to crack them.
User sessions are critical for maintaining continuity in application interactions, but they also represent a significant attack vector if not managed securely. Secure session management prevents attackers from impersonating legitimate users, protects against brute-force attacks, and ensures the confidentiality and integrity of session data. Proper session management is fundamental to preventing common web vulnerabilities like session hijacking.
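A minimal sketch of hardened session handling with PHP's native session API; the timeout value and the login hook are illustrative assumptions.

```php
<?php
declare(strict_types=1);

// Harden the session cookie before the session starts.
session_start([
    'cookie_secure'   => true,      // only over HTTPS
    'cookie_httponly' => true,      // not readable from JavaScript
    'cookie_samesite' => 'Strict',  // mitigate CSRF
    'use_strict_mode' => true,      // reject uninitialized session IDs
]);

// Idle timeout: destroy sessions that have been inactive too long.
$idleLimit = 15 * 60; // 15 minutes, an illustrative value
if (isset($_SESSION['last_seen']) && time() - $_SESSION['last_seen'] > $idleLimit) {
    session_unset();
    session_destroy();
    exit('Session expired, please log in again.');
}
$_SESSION['last_seen'] = time();

// Regenerate the session ID after privilege changes (e.g., login) to
// prevent session fixation.
function onSuccessfulLogin(): void
{
    session_regenerate_id(true);
}
```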
Vigilantly protect your application from vulnerabilities introduced through third-party components and development pipelines.
- Implement a Software Bill of Materials (SBOM): Maintain a comprehensive, dynamic inventory of all software components (open-source and proprietary code, dependencies, libraries). Use automated SBOM generation tools for continuous tracking, and regularly scan components for vulnerabilities using Software Composition Analysis (SCA) tools.
- Secure Open-Source and Third-Party Software: Only use packages from trusted, reputable repositories. Establish strict validation processes for all third-party libraries and tools, including scanning, monitoring, and verifying their security posture. Require multi-factor authentication (MFA) for developers accessing repositories.
- Strengthen CI/CD Pipeline Security: Use code signing to ensure integrity. Enforce least-privilege access for automation tools. Scan repositories for hardcoded secrets. Regularly test and audit the CI/CD pipeline for vulnerabilities.
- Implement Secure Coding Practices: Foundational secure coding practices are critical for mitigating internal development risks.
- Establish Vendor Risk Management: Regularly audit third-party vendors to ensure they adhere to required security standards, and assess their security protocols, infrastructure, and compliance.
Modern software heavily relies on open-source components and third-party libraries, introducing significant supply chain risks. Securing the software supply chain is paramount to protecting applications from vulnerabilities and malicious attacks introduced before your code even runs. A comprehensive SBOM improves visibility, while securing dependencies and CI/CD pipelines ensures integrity and reduces attack vectors from external or automated sources. This rule addresses the growing attack surface introduced by complex software ecosystems.
Treat APIs as first-class citizens in your security strategy, securing every interaction point.
- Implement Strong Authentication and Authorization: Enforce proper authorization checks to ensure authenticated clients have the necessary permissions. Use granular access controls for sensitive API endpoints, data, objects, and functions. Address Broken Object Level Authorization and Broken Object Property Level Authorization.
- Validate Input and Encode Output: Validate and sanitize all input received from API clients. Encode output appropriately to prevent malicious script execution.
- Use Secure Communication: Employ secure protocols (HTTPS/TLS) for transmitting data and encrypt sensitive information in transit and at rest.
- Implement Rate Limiting and Throttling: Enforce limits on the number of requests API clients can make within a specified time frame to prevent excessive usage, Distributed Denial of Service (DDoS) attacks, and brute-force attempts.
- Conduct Regular Security Testing and Auditing: Perform regular security assessments, penetration testing, and code reviews to identify and address potential vulnerabilities in APIs. Conduct security audits to detect weaknesses and ensure compliance with industry standards.
- Enforce Schema and Protocol Compliance: Automatically create and enforce a positive security model from OpenAPI specifications to ensure a consistent security policy for APIs.
- Dynamically Discover and Continuously Assess APIs: Constantly inventory and protect all APIs, including "shadow" (undocumented or unmanaged) APIs, applying zero trust and least-privilege access paradigms to mitigate unforeseen risks from third-party interdependencies.
- Comprehensive Threat Detection: Protect APIs against exploits, misconfiguration, bots, fraud, and abuse.
APIs are the backbone of modern interconnected applications and represent a significant attack surface. Their proliferation introduces unique security challenges that require dedicated attention to prevent data breaches, unauthorized access, and service disruptions. Robust authentication, authorization, input validation, and continuous monitoring are essential to mitigate the diverse threats targeting APIs. This rule emphasizes the need for a holistic approach to API security, from design to deployment and ongoing operations.
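As one small piece of the above, a fixed-window rate limiter sketch. The in-memory store is for illustration only (state would not survive across requests); a real deployment would back this with Redis or another shared store. Class and parameter names are assumptions.

```php
<?php
declare(strict_types=1);

final class FixedWindowRateLimiter
{
    /** @var array<string, array{count: int, windowStart: int}> */
    private array $buckets = [];

    public function __construct(
        private readonly int $limit = 100,        // max requests per window
        private readonly int $windowSeconds = 60  // window length
    ) {}

    // Returns true while the client is still within its allowance.
    public function allow(string $clientId): bool
    {
        $now    = time();
        $bucket = $this->buckets[$clientId] ?? ['count' => 0, 'windowStart' => $now];

        if ($now - $bucket['windowStart'] >= $this->windowSeconds) {
            $bucket = ['count' => 0, 'windowStart' => $now]; // new window
        }

        $bucket['count']++;
        $this->buckets[$clientId] = $bucket;

        return $bucket['count'] <= $this->limit;
    }
}

$limiter = new FixedWindowRateLimiter(limit: 5, windowSeconds: 60);

if (!$limiter->allow($_SERVER['REMOTE_ADDR'] ?? 'unknown')) {
    http_response_code(429); // Too Many Requests
    exit;
}
```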
Throw exceptions (fail hard) for interactive input, degrading gracefully for non-interactive input.
- For interactive input, meet invalid input with immediate and unambiguous failure (fail hard).
- For non-interactive processes (e.g., batch jobs, background tasks), implement graceful degradation or robust logging mechanisms.
This principle ensures that direct user interactions, which often represent immediate security risks, are met with immediate and unambiguous failure when invalid input is provided. For non-interactive processes, graceful degradation ensures continuity of service while still flagging issues for later review. This approach balances responsiveness for critical user actions with resilience for background tasks.
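A brief sketch of the distinction, reusing the `EmailAddress` and `SecurityLog` sketches from earlier; the handler names and record format are assumptions.

```php
<?php
declare(strict_types=1);

// Interactive path: fail hard. The user gets an immediate, unambiguous error.
function handleSignupRequest(array $post): EmailAddress
{
    return EmailAddress::parse($post['email'] ?? ''); // throws on invalid input
}

// Non-interactive path: degrade gracefully. Bad records are logged and
// skipped so one malformed row does not abort the whole batch.
function importSubscribers(iterable $rows, SecurityLog $log): array
{
    $imported = [];

    foreach ($rows as $i => $row) {
        try {
            $imported[] = EmailAddress::parse($row['email'] ?? '');
        } catch (InvalidArgumentException $e) {
            $log->event('import.invalid_row', ['row' => $i, 'reason' => $e->getMessage()]);
        }
    }

    return $imported;
}
```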
Avoid ambiguity. In multi-tier software, ensure that every tier parses input identically.
- Apply identical parsing of input across all tiers of multi-tier software.
This prevents vulnerabilities such as HTTP Request Smuggling and HTTP Parameter Pollution, where different interpretations of the same request by various components (e.g., load balancer, web server, application server) can lead to bypasses or data manipulation. Consistent parsing across all layers eliminates this vector of attack, ensuring uniformity and preventing unexpected behavior.
Use read-once ValueObjects for sensitive data.
- Design ValueObjects for sensitive information (e.g., cryptographic keys, one-time tokens, personal identifiers) to be consumed or invalidated after their first legitimate use.
The primary purpose of such objects is to facilitate the detection of unintentional or multiple uses of sensitive information. By designing these objects to be consumed or invalidated after their first legitimate use, the risk of accidental exposure or reuse is significantly reduced. This practice enhances memory safety and reduces the attack surface for sensitive data.
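A minimal read-once sketch; the class name and the choice of exception are assumptions.

```php
<?php
declare(strict_types=1);

final class ReadOnceSecret
{
    private ?string $value;

    public function __construct(string $value)
    {
        $this->value = $value;
    }

    // The first read returns the secret and immediately invalidates it;
    // any further read signals an unintended second use.
    public function consume(): string
    {
        if ($this->value === null) {
            throw new LogicException('Secret has already been consumed.');
        }

        $secret      = $this->value;
        $this->value = null;

        return $secret;
    }

    // Keep secrets out of debug output and logs.
    public function __debugInfo(): array
    {
        return ['value' => '***redacted***'];
    }
}

$token = new ReadOnceSecret('one-time-token-123');
$first = $token->consume();   // fine
// $token->consume();         // would throw LogicException: already consumed
```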
This manifesto provides a robust framework for building and maintaining secure applications by emphasizing proactive security measures embedded throughout the software development lifecycle. The principles outlined, from the foundational "Absolute Zero" approach to minimizing attack surface, to the meticulous handling of data at "sinks," and the adoption of "Parse, don't validate" for input, collectively aim to reduce vulnerabilities and enhance resilience.
The deeper considerations reveal that application security is not merely a technical challenge but also an economic and operational imperative. The financial ramifications of unaddressed "dead code" and the increasing complexity of securing distributed systems underscore the need for continuous vigilance and adaptive strategies. Furthermore, the reliance on third-party components necessitates robust supply chain security practices, while the pervasive nature of APIs demands specialized attention to their authentication, authorization, and data handling.
To effectively implement these principles, organizations are recommended to:
- Integrate Security by Design: Embed security considerations from the earliest stages of design and threat modeling, making it a shared responsibility across development, operations, and business teams.
- Automate and Continuously Monitor: Leverage automated tools for static and dynamic analysis, vulnerability scanning, and continuous monitoring of logs and system activity. Automation not only increases efficiency but also reduces human error in identifying vulnerabilities.
- Prioritize and Remediate Risks: Adopt a risk-based approach to prioritize the remediation of vulnerabilities based on their impact, exploitability, and exposure. Develop detailed remediation plans, including immediate fixes and long-term solutions.
- Invest in Developer Education: Continuously educate developers on secure coding practices, common security pitfalls, and the importance of principles like least privilege and data encryption.
- Foster a Security-First Culture: Encourage every team member to think critically about potential vulnerabilities and prioritize secure coding practices from day one.
By adhering to these principles and recommendations, organizations can build a solid foundation of user trust, enhance operational performance, and safeguard critical data and systems against an evolving threat landscape.