0xIvan

    • Building an AI Guardrail with Embeddings

      Jan 02, 2026 — LLMs are powerful, but they’re vulnerable. Prompt injection attacks can trick models into ignoring instructions, leaking data, or doing things they sh...
      • machine-learning
      • AI
      • security
      • prompt-injection
      • embeddings
    • LokiBot Analysis

May 08, 2022 — Brief Introduction: The initial delivery was via email; however, this post is about analyzing the delivery stages, malware, and some SECOPS fails from th...
      • malware
      • reverse-engineering
      • security
