
On-Course Intelligence: Networking Powers Real-Time AI at the 2025 Ryder Cup

At the 2025 Ryder Cup, HPE deployed an AI-ready network and on-site cloud to process live feeds from cameras, sensors, and systems for instant operational decision-making and edge inference.

Event-scale networking challenge

The 2025 Ryder Cup brought nearly a quarter of a million spectators to Bethpage Black in Farmingdale, New York, and with them an enormous demand on the venue's IT and networking systems. Managing tens of thousands of devices, cameras, sensors, point-of-sale terminals, and staff workstations over three days created a real-world test of what it takes to run AI and real-time analytics at scale.

A central hub for operational decision-making

To tackle the complexity, Ryder Cup organizers partnered with HPE to build a connected operational hub. The platform collected diverse, real-time data streams and fed them into a private-cloud environment and a high-performance network. The result was a dashboard that aggregated telemetry from ticket scans, weather feeds, GPS-tracked golf carts, concession and merchandise sales, queue lengths, and network performance to give staff an instantaneous view of activity across the grounds.
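To make the aggregation concrete, here is a minimal Python sketch of how such a hub might poll heterogeneous feeds and fold them into a single dashboard snapshot. The feed names, fields, and values are illustrative assumptions, not HPE's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of an operations hub merging heterogeneous
# event streams into one venue-wide snapshot. Every feed name and
# metric below is invented for illustration.

@dataclass
class VenueSnapshot:
    taken_at: datetime
    metrics: dict = field(default_factory=dict)

FEEDS = {
    "ticket_scans": lambda: {"entries_last_5min": 1842},
    "weather": lambda: {"temp_f": 61, "wind_mph": 14},
    "cart_gps": lambda: {"carts_active": 112},
    "concessions": lambda: {"sales_per_min": 97},
    "network": lambda: {"ap_clients": 48210, "p95_latency_ms": 23},
}

def build_snapshot() -> VenueSnapshot:
    """Poll every feed and fold the results into one dashboard view."""
    snap = VenueSnapshot(taken_at=datetime.now(timezone.utc))
    for name, poll in FEEDS.items():
        snap.metrics[name] = poll()  # in production: live queries, not lambdas
    return snap

print(build_snapshot().metrics["network"])
```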

Jon Green, CTO of HPE Networking, stresses that networking is the third pillar of AI deployment alongside models and data. 'Disconnected AI doesn’t get you very much; you need a way to get data into it and out of it for both training and inference,' he explains. The Ryder Cup showed how crucial the network is when inference and rapid response are required.

Designing networks for inference

Traditional enterprise networks are built for predictable application flows like email and file sharing. AI workloads, especially inference, demand a different profile: ultra-low latency, lossless throughput, specialized hardware, and the ability to move very large datasets between GPUs with precision. Small delays or packet loss that go unnoticed in typical office apps can bottleneck an AI inference pipeline and derail time-sensitive processing.
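A back-of-envelope calculation shows why. The numbers below are illustrative assumptions, and the stall model is deliberately simplified (real transports recover faster via fast retransmit), but the direction holds: at AI transfer sizes, even 0.1% packet loss can dominate total completion time.

```python
# Illustrative arithmetic (not measured Ryder Cup data) on why small
# loss rates hurt AI transfers far more than typical office traffic.

dataset_gb = 40       # e.g., model state moved between GPUs
link_gbps = 100       # back-end link speed

ideal_s = dataset_gb * 8 / link_gbps
print(f"Ideal transfer time: {ideal_s:.1f} s")

# Assume 0.1% of packets are lost and each loss stalls the flow for a
# conservative 10 ms recovery timeout. The stalls add up quickly.
mtu_bytes = 9000      # jumbo frames, common in GPU fabrics
packets = dataset_gb * 1e9 / mtu_bytes
loss_rate = 0.001
stall_s = packets * loss_rate * 0.010
print(f"Added stall time at 0.1% loss: {stall_s:.1f} s")
# ~3 s of ideal transfer vs. ~44 s of accumulated stalls.
```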

To address these needs, the Ryder Cup deployment used a two-tiered architecture. Across the venue, more than 650 Wi-Fi 6E access points, 170 network switches, and 25 user-experience sensors provided broad, dense connectivity where crowds gathered. That front-end layer connected cameras and sensors to capture live video and motion data. A back-end layer, hosted in a temporary on-site data center, linked GPUs and servers in a high-speed, low-latency configuration to serve as the system's brain and run live analytics.

That architecture enabled both immediate operational responses on the ground and collection of data that could inform future planning. Alongside sensors, 67 AI-enabled cameras analyzed footage so models could surface the most interesting shots and other meaningful events in near real time.
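As a rough illustration of the highlight-surfacing idea, the sketch below scores incoming frames with a placeholder model and keeps only the top candidates. The scoring function and frame fields are invented for the example; the actual on-site pipeline is not public.

```python
import heapq

# Illustrative near-real-time highlight surfacing: score each incoming
# frame and retain the k most interesting. The "model" is a placeholder.

def interestingness(frame) -> float:
    """Stand-in for an on-site model's inference call."""
    return frame["crowd_noise"] * 0.6 + frame["ball_speed"] * 0.4

def surface_highlights(frames, k=3):
    top = []  # min-heap of (score, frame_id); smallest score evicted first
    for frame in frames:
        heapq.heappush(top, (interestingness(frame), frame["id"]))
        if len(top) > k:
            heapq.heappop(top)  # drop the weakest candidate
    return sorted(top, reverse=True)

stream = [
    {"id": "cam12-0412", "crowd_noise": 0.9, "ball_speed": 0.7},
    {"id": "cam03-0413", "crowd_noise": 0.2, "ball_speed": 0.3},
    {"id": "cam07-0414", "crowd_noise": 0.8, "ball_speed": 0.9},
]
print(surface_highlights(stream, k=2))
```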

Edge inference and the return of on-prem intelligence

The Ryder Cup example highlights a larger trend for physical AI, where decision latency can have consequences for safety and operations. Use cases such as autonomous vehicles or automated factory machinery require inference to happen close to the data source. Sending sensor data to a remote cloud for inference introduces round-trip delays that are unacceptable for split-second decision making.
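A simple latency budget makes the point. The round-trip figures below are typical assumptions rather than measurements, but they show how a WAN hop alone can consume a real-time budget of roughly one video frame at 30 fps (about 33 ms).

```python
# Rough latency budget (illustrative numbers, not measurements)
# comparing remote-cloud inference with on-site edge inference.

def total_latency_ms(network_rtt_ms: float, inference_ms: float) -> float:
    return network_rtt_ms + inference_ms

cloud = total_latency_ms(network_rtt_ms=70.0, inference_ms=15.0)  # WAN round trip
edge = total_latency_ms(network_rtt_ms=1.0, inference_ms=15.0)    # on-site hop

print(f"cloud: {cloud:.0f} ms, edge: {edge:.0f} ms")
# A ~33 ms per-frame budget is met only by the edge deployment.
```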

This has driven a hybrid approach: heavy, data-intensive model training still happens in centralized clouds, but inference moves toward edge clusters and on-prem deployments. Enterprises are reevaluating where workloads should live, balancing speed, security, sovereignty, and cost. Research and market forecasts suggest many organizations are already adjusting deployment strategies as AI grows.

AI-driven networking and self-driving infrastructure

The relationship between networking and AI runs both ways. Networks generate massive telemetry, making them ideal candidates for AI-driven optimization. At HPE, models analyze anonymized telemetry from billions of devices and a trillion telemetry points per day to discover patterns that improve performance and stability.

AIOps is already surfacing actionable recommendations to administrators, and the next step is increasingly autonomous networks that can test and apply low-risk changes automatically. That vision of a self-driving network aims to remove repetitive, error-prone tasks from engineers, enabling simple, large-scale configuration changes and automated fixes for common connectivity issues.
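One way to picture the "low-risk changes first" policy is a small gatekeeper between recommendations and the network, as in this hypothetical sketch. The action names and thresholds are invented for illustration and do not reflect any vendor's product.

```python
# Hypothetical gatekeeper for increasingly autonomous networks:
# auto-apply only recommendations a policy deems low risk; route
# everything else to a human administrator.

LOW_RISK_ACTIONS = {"adjust_channel", "rebalance_clients", "bounce_port"}

def handle(recommendation: dict, apply, notify):
    action = recommendation["action"]
    blast_radius = recommendation["affected_clients"]
    if action in LOW_RISK_ACTIONS and blast_radius < 50:
        apply(recommendation)   # automated fix for a common, contained issue
    else:
        notify(recommendation)  # surface to an administrator for review

handle(
    {"action": "adjust_channel", "affected_clients": 12},
    apply=lambda r: print("auto-applied:", r["action"]),
    notify=lambda r: print("needs review:", r["action"]),
)
```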

Why this matters beyond sports

Whether coordinating a major live event or running a distributed industrial system, the performance of the network increasingly determines business outcomes. The Ryder Cup deployment is a concrete example of how investment in AI-ready networking and edge inference unlocks real-time intelligence. Organizations that build this foundation will be better positioned to move from pilots to scaled AI operations.
