# I'm Joe

Infrastructure Manager with long-standing Linux experience, starting with Slackware in the 1990s. Over the years, I've worked across a wide range of systems and environments, dealing with the usual realities of dependencies, packaging, and long-term maintenance. Linux remains a constant interest for me. I regularly test distributions, explore different setups, and lately that's extended into evaluating laptops for real-world distro compatibility and reliability.

## What I Do

I manage infrastructure that spans legacy systems, cloud services, Linux administration, CI/CD pipelines, and both microservice-based and monolithic applications. I work with Nix and NixOS where they make sense, valuing reproducibility and clarity, but without treating them as an identity or a lifestyle. Like most infrastructure work, my focus is pragmatic: what's maintainable, understandable, and stable over time.

## AI, Disinformation, and Manipulation

I'm interested in AI less as a speculative future problem and more as a present-day amplifier of existing issues, especially disinformation and manipulation. Generative systems make it easier to produce convincing narratives at scale, lower the cost of coordinated misinformation, and enable individuals with narcissistic or manipulative tendencies to project influence far beyond what was previously possible. This isn't abstract or hypothetical; these tools already exist and are already being used this way.

What concerns me isn't intelligence itself, but leverage: how automation reduces friction for bad actors while increasing the difficulty of verification and trust. Detection tends to lag generation, and social systems are slower to adapt than technical ones.

## Practical Value of Skepticism

I have a strong bias toward substance over presentation. I'm attentive to gaps between claims and reality, whether that shows up in vendor pitches, architectural proposals, or organizational narratives.
That perspective is useful in infrastructure work, procurement, and stakeholder conversations, where clarity and accountability matter more than polish. Applied to AI systems, this translates into a focus on misuse potential, incentive alignment, and downstream effects, rather than abstract optimism or fear.

## The Bottom Line

My work centers on keeping systems understandable, resilient, and grounded in reality. Whether that's infrastructure, operating systems, or emerging tools like AI, the goal is the same: reduce unnecessary complexity, surface real risks early, and avoid being distracted by hype or theatrics.