Digital border tax on digital services
The digital economy has become the main event. As more business processes migrate online, governments are stepping in to make the tax system catch up. If you’re a tech or platform company operating across borders, expect new tariffs on digital services. These measures are being introduced under what many are calling “digital border taxes.” The rationale is difficult to argue with: if tech giants are profiting from users in a country, then that country wants its fair share in taxes.
Australia provides a clear example. Companies like Google and Facebook generated AU$15 billion in digital revenue locally but paid only AU$254 million in taxes. That kind of gap draws a lot of attention. And it’s not just Australia. Several governments are working on similar initiatives. The direction here is obvious: digital services will face the same scrutiny that physical goods once did.
This shift will force a restructuring of how digital transactions are tracked and billed. Expect new intermediaries to emerge, offering usage-based billing services. These will track digital consumption, calculate local tariffs, and automate collections. It’s similar to how data plans used to work before everything became “unlimited.”
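The billing model described above can be sketched in a few lines. This is a minimal illustration, not a real tax engine: the territory codes and per-unit rates below are hypothetical placeholders, and actual rates would come from each jurisdiction's digital-services rules.

```python
from collections import defaultdict

# Hypothetical per-territory tariff rates (per billable unit of digital
# consumption). Real rates would be sourced from local tax frameworks.
TARIFF_RATES = {"AU": 0.030, "FR": 0.025, "IN": 0.020}

def calculate_tariffs(usage_records, rates=TARIFF_RATES):
    """Aggregate usage by territory and apply the local tariff rate.

    usage_records: iterable of (territory_code, billable_units) tuples.
    Returns a dict mapping territory -> tariff owed, rounded to cents.
    """
    units_by_territory = defaultdict(float)
    for territory, units in usage_records:
        if territory not in rates:
            raise ValueError(f"No tariff rate configured for {territory}")
        units_by_territory[territory] += units
    return {t: round(u * rates[t], 2) for t, u in units_by_territory.items()}

usage = [("AU", 1000), ("AU", 500), ("FR", 2000)]
print(calculate_tariffs(usage))  # {'AU': 45.0, 'FR': 50.0}
```

The key design point is the per-territory aggregation step: once usage is tracked at that granularity, swapping in new rates or new jurisdictions is a configuration change rather than a billing-system rewrite.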
For C-suite decision-makers, this is a policy pivot that requires agile infrastructure and compliance teams. Legal departments need to understand local taxation frameworks, while product and finance teams need systems capable of tracking usage by territory. The era of one-size-fits-all digital pricing models is fading. What replaces it will be more granular and region-specific.
Emergence of neocloud providers
The cloud isn’t going anywhere. But the way we build and use it is evolving fast. What you’re seeing now is a clear shift from the traditional hyperscalers (Amazon, Microsoft, Google) toward a new generation of providers. These are the “Neoclouds.” They’re not trying to match the old playbook. They’re designing cloud infrastructure around artificial intelligence from the ground up.
Neoclouds are focused on two things: performance and clarity. Most of them use GPU-based computing to handle today’s AI workloads natively, no conversion, no complexity. They’re not treating AI as an add-on. It’s the base layer. That makes a big difference in developer experience. Simpler APIs. Better transparency in pricing. Faster model iteration. These are things developers and product teams actually need.
For large enterprises, this trend signals something worth paying attention to. The hyperscalers are still dominant by scale, but they’ve become bloated. Their cloud offerings often take more time to deploy and cost more than expected. Developers are not happy with environments that feel sluggish or hard to customize. Neoclouds are nimble and purpose-built, making them attractive to companies building around AI from day one.
If you’re leading a digital-first business, or aiming to become one, you’ve got decisions to make. Do you optimize for deep legacy integration, or shift toward specialized, high-performance AI infrastructure? Neoclouds won’t replace hyperscalers outright, but they’re creating new expectations. Usability, transparency, and raw performance will become key differentiators. Don’t wait too long to factor that into your strategic planning.
Agentic AI as a new cybersecurity vulnerability
AI isn’t just generative now. It’s becoming agentic, able to take action on behalf of users without continuous prompts. That changes the game. These autonomous systems are beginning to power connected devices at every level, from enterprise infrastructure to consumer electronics. The catch is simple: more autonomy means more trust, and more trust means more risk.
Agentic AI requires broad permissions. It accesses data, executes commands, and coordinates digital processes, often unsupervised. If you’re not managing those permission sets tightly, you’re opening the door to manipulation. Malicious actors don’t need to break systems anymore; they just need to convince an AI to act destructively. That can mean data exfiltration, workflow manipulation, or unauthorized automation at critical points in your infrastructure.
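The permission-management problem above comes down to deny-by-default scoping: an agent should only be able to invoke actions it has been explicitly granted. A minimal sketch, with hypothetical agent and action names:

```python
# Deny-by-default action scoping for autonomous agents. Agent IDs,
# action names, and handlers here are illustrative placeholders.
ALLOWED_ACTIONS = {
    "support-agent": {"read_ticket", "draft_reply"},
    "billing-agent": {"read_invoice"},
}

class ActionDenied(Exception):
    """Raised when an agent attempts an action outside its grant set."""

def execute(agent_id, action, handler, *args):
    """Run a tool call only if this agent is explicitly allowed to.

    Anything not on the allowlist is refused, which is the inverse of
    the broad, standing permissions agentic systems often receive.
    """
    if action not in ALLOWED_ACTIONS.get(agent_id, set()):
        raise ActionDenied(f"{agent_id} may not perform {action}")
    return handler(*args)

print(execute("support-agent", "read_ticket", lambda tid: f"ticket {tid}", 42))
```

In practice the allowlist would live in a policy service and every grant and refusal would be logged, but the principle is the same: the AI is a user, and its privileges should be enumerated, minimal, and auditable.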
Microsoft has already flagged these concerns, particularly in relation to Copilot, its AI productivity assistant. And Microsoft isn’t alone. As these technologies go mainstream, new attack vectors are going to emerge fast. Most organizations aren’t built to handle this kind of dynamic threat environment. Traditional access controls fall short because the AI itself is often the user being granted access.
For C-suite executives, this is a strategic issue. AI isn’t just a tool, it’s becoming part of the operational layer. Security models must evolve alongside it. That means auditing privilege structures for non-human actors, expanding zero-trust policies to autonomous processes, and investing in real-time anomaly detection. If you’re embedding AI deeply into your operations, securing those layers isn’t optional, it’s foundational.
Rise of AI veganism and ethical resistance
A new wave of digital ethics is emerging, not from regulators, but from your own users and employees. “AI veganism” is what some are calling it. People are choosing to opt out of AI tools completely. Their reasons vary: data privacy concerns, fears over algorithmic bias, and the environmental impact of energy-intensive AI systems.
This group is still a minority, but it’s visible and vocal, and growing. You’re going to see demand for products with transparency controls, AI-free modes, or full human oversight. Some brands have already started positioning themselves with AI-free labels, using ethics as a differentiating point in the market. In sectors tied to personal data or creative work, publishing, education, healthcare, this shift matters.
For most companies, especially in cybersecurity and scaled tech infrastructure, avoiding AI entirely isn’t realistic. The performance and protection gains are too significant to ignore. But here’s the nuance: offering opt-outs or transparent control points isn’t just customer service, it reduces exposure. Users who feel coerced into AI interactions are more likely to challenge outcomes, creating regulatory and reputational risks.
Executives should treat this as a product and policy design issue. Build optionality into your AI experiences. Make clear when and how AI is used. And prepare legal and operations teams to handle emerging liability discussions. The ethical use of AI isn’t only about values, it’s becoming part of how competitive, compliant, and credible your company looks to the outside world.
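Building optionality in can be as simple as checking a user preference before any AI feature runs. A hedged sketch: the preference store and the `ai_summary` stub below are hypothetical stand-ins for a real settings service and model call.

```python
# Honoring a per-user AI opt-out before routing to an AI feature.
# user_preferences and ai_summary are illustrative placeholders.
user_preferences = {"alice": {"ai_enabled": False}, "bob": {"ai_enabled": True}}

def ai_summary(text):
    # Stand-in for a real model call.
    return text[:30].rstrip() + "..."

def summarize(text, user):
    """Route to the AI feature only for users who haven't opted out."""
    prefs = user_preferences.get(user, {})
    if not prefs.get("ai_enabled", True):
        return text  # AI-free mode: deliver the content untouched
    return ai_summary(text)
```

The design choice worth noting is that the opt-out lives at the routing layer, not inside each feature, so "when and how AI is used" stays enforceable and auditable in one place.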
Expansion of the global side-hustle economy
AI isn’t just disrupting traditional workflows, it’s changing how people value their time and income streams. Millions of professionals are now using AI tools to spin up content, generate code, design assets, or offer services on-demand. These aren’t hobbyists. Many are building sustainable side businesses powered by low-cost, high-capability AI platforms.
This shift is being driven by access. Subscription-based AI tools have drastically lowered the barrier to entry. You don’t need infrastructure. You don’t need a team. You need an idea and a toolset that’s available on a flexible license. That model unlocks economic participation at the individual level across geographies, and it scales fast.
The challenge for traditional enterprises is what happens inside organizational boundaries. Employees who engage in this “shadow AI” economy often do so without disclosing where and how these tools are used. That creates issues around data handling, IP leakage, and productivity measurement. You’re also going to lose talent, not to direct competitors, but to decentralized autonomy. People are choosing efficiency and ownership over legacy structure.
From a leadership perspective, ignoring this trend is not a viable strategy. Enterprises need policies that address external monetization, internal use of AI, and intellectual property protections. At the same time, there’s opportunity here. Smart companies will absorb this behavior, encourage innovation, reward output, and adapt policies to retain high performers who want flexible models. It’s not disruption. It’s transformation at the workforce level, and your systems need to account for it.
Increased threats to identity and access through account poisoning
Digital security isn’t only about hardening systems; it’s about watching what moves through them. Financial institutions are already facing an emerging attack model called “account poisoning.” This is where criminals manipulate automated banking flows, especially APIs and payment routing systems, to divert funds to unauthorized destinations. It’s precise, scalable, and hard to detect if your oversight is weak.
This type of fraud often begins with altering payee details at points in the payment chain. If that data gets injected early enough, it travels through the system unnoticed. The criminal might split transactions or use intermediaries to obscure the ownership chain. That makes it a form of supply chain attack, except the supply chain is financial infrastructure, not physical goods.
These incidents are difficult to catch unless your institutions have real-time anomaly detection tied to disbursement controls. Automation works both ways: it accelerates legitimate transactions and also masks malicious ones. The more your operations rely on machine-to-machine communication, the more critical it becomes to validate not just the transaction’s origin, but also its behavior patterns over time.
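One of the simplest disbursement controls against poisoned payee data is comparing each outgoing payment against a separately verified payee record and holding anything that deviates. A minimal sketch, with hypothetical payee IDs and Australian-style BSB/account fields:

```python
# Flag disbursements whose payee bank details deviate from a previously
# verified record. Payee IDs and account details are hypothetical.
verified_payees = {
    "ACME-SUPPLIES": {"bsb": "062-000", "account": "12345678"},
}

def check_disbursement(payee_id, bsb, account, registry=verified_payees):
    """Return True if details match the verified record.

    Unknown payees or mismatched details return False, which in a real
    pipeline would route the payment to a manual-verification hold.
    """
    record = registry.get(payee_id)
    if record is None:
        return False  # unknown payee: hold for out-of-band verification
    return record["bsb"] == bsb and record["account"] == account
```

This only covers one injection point; the behavioral monitoring the text describes (split transactions, unusual intermediaries) would layer on top of this static check.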
For executives, the implications go beyond finance. Account poisoning exposes fundamental vulnerabilities in enterprise process automation. If you haven’t coordinated cybersecurity, finance, and DevOps around shared transactional integrity, you’re running an open risk. Solutions lie in layered access control, behavioral monitoring, and visibility across all stages of the payment pipeline. You won’t catch these attacks manually. You need adaptive systems that understand something is off before the money is gone.
The decline of VPNs in favor of zero-trust security models
VPNs used to be the default answer to secure remote access. That’s no longer the case. Today, they’re more of a liability than a solution. Attackers have become highly effective at compromising VPN credentials or exploiting vulnerabilities in outdated VPN appliances. Once inside, they move laterally through networks with little resistance. That behavior is well-documented, and preventable.
The security model underpinning VPN usage assumes too much trust based on a single authentication event. That’s not how modern networks operate anymore. Teams are distributed. Devices change. Threats move faster than static certificates can keep up. The industry is responding by shifting to zero-trust architecture, where no access is assumed and every transaction is continuously verified.
Zero-trust isn’t marketing. It’s execution. You build policies and technology around identity verification, device posture, and behavioral context. That means replacing perimeter-based access tools with systems that adapt in real-time. Privileged access management, multi-factor authentication, and context-aware enforcement become core to how infrastructure is accessed and managed.
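The per-request verification described above can be reduced to a policy function that every access attempt must pass. This is a deliberately simplified sketch: the fields and the risk threshold are hypothetical, and real implementations would pull them from an identity provider, device-management telemetry, and an anomaly-scoring service.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool  # identity verified for this request
    mfa_passed: bool          # multi-factor challenge satisfied
    device_compliant: bool    # posture check: patched OS, disk encryption
    risk_score: float         # behavioral context, 0.0 normal .. 1.0 anomalous

def authorize(req: AccessRequest, risk_threshold: float = 0.7) -> bool:
    """Zero-trust style decision: nothing is assumed from past sessions.

    Identity, device posture, and behavioral context are all re-checked
    on every request, unlike a VPN's single authentication event.
    """
    return (req.user_authenticated
            and req.mfa_passed
            and req.device_compliant
            and req.risk_score < risk_threshold)
```

The contrast with the VPN model is the point: there is no "inside the tunnel" state to inherit, so a compromised credential alone no longer grants lateral movement.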
Executives responsible for security infrastructure should stop treating VPNs as a sunk cost and start phasing them out strategically. Critical systems, especially in large enterprises, should not be accessible through flat, binary-access tunnels. The investment in zero-trust won’t only reduce breach risk, it will align your security posture with how your teams actually operate in today’s environment.
Convergence of physical and digital threats
The gap between digital and physical security is closing fast. Consumer-grade devices like AirTags, originally designed for asset tracking, are now being co-opted by bad actors. These tools are inexpensive, widely available, and easy to conceal. That combination makes them ideal for unauthorized surveillance and targeting.
Bad actors can place these devices on high-value cargo, personal assets, or individuals. Once paired with basic mobile connectivity and open-source intelligence, they provide near real-time tracking with minimal resource investment. This is no longer a theoretical threat; several documented cases already show misuse of these devices for criminal activity and reconnaissance.
What this means for enterprise security is straightforward, physical security protocols need digital awareness, and vice versa. If your organization manages mobile teams, intellectual property, or sensitive materials, it’s not enough to only monitor the network. Asset monitoring, access controls, and deviation alerts must account for embedded or attached tracking devices as part of routine security sweeps.
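A routine sweep of the kind described can be framed as an inventory diff: every tracker or beacon identifier observed near an asset is checked against the company's registered devices, and anything unrecognized is flagged. A sketch under assumed inputs; the IDs are placeholders, and a real sweep would pull observations from a BLE scanner or RF survey tool.

```python
# Compare device identifiers observed during a physical security sweep
# against an inventory of known company assets. IDs are hypothetical.
known_assets = {"AA:11", "BB:22"}

def unknown_devices(observed_ids, inventory=known_assets):
    """Return any observed identifiers that aren't company-registered,
    deduplicated and sorted for the sweep report."""
    return sorted(set(observed_ids) - inventory)

# A sweep that picks up one unregistered tracker among known assets:
print(unknown_devices(["AA:11", "CC:33", "BB:22", "CC:33"]))  # ['CC:33']
```

The interesting part operationally is keeping the inventory current, which is exactly where the physical and digital security teams have to share data rather than work in silos.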
Leadership should frame this as a convergence issue. Security teams can no longer operate in separate silos. Physical and digital assets are interconnected, and so are the threats targeting them. Building integrated detection and response strategies that cover both domains isn’t optional. It’s what’s required to protect people and property in a world where commercial tools can be turned hostile with minimal effort.
The bottom line
The ground under cybersecurity and enterprise architecture isn’t just shifting; it’s being rebuilt in real time. AI is no longer optional. Legacy systems like VPNs are becoming risk magnets. Consumer behavior is being shaped by ethics as much as performance. Regulatory models are catching up to digital business at scale. And the talent landscape is being decentralized by automation.
For executives and business leaders, the key isn’t to chase every trend. It’s to recognize which ones intersect directly with your infrastructure, threat posture, and ability to scale without absorbing unnecessary complexity. This isn’t about future-proofing five years out, it’s about staying functional and competitive over the next 18 months.
Build for agility. Secure for autonomy. Invest in transparency, internally and externally. And expect resistance not just from attackers, but sometimes from your own users, teams, and tooling. That’s not friction. That’s feedback. Use it.