Free-view video lets viewers choose their own camera parameters when watching a recorded or live event: they can interactively control the camera view and focus on different parts of the scene. This paper presents a novel client-server architecture for free-view video of sports. The clients obtain a detailed 3D representation of the players and the game field from the server or a shared repository. The server receives video streams from several cameras around the game field, detects the players, determines the camera with the best view, extracts the pose of each player, and encodes this data together with a timestamp into a snapshot, which is streamed to the clients. A client receives the stream of snapshots, applies each pose to the corresponding player's 3D model (avatar), and renders the scene according to the user's virtual camera. We have implemented our approach using VIBE [Kocabas et al. 2020] for pose extraction and obtained promising results. We transferred a soccer game into a 3D representation supporting free view, with a reconstruction error below . Our unoptimized implementation is near real-time, running at about 30 frames per second.
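The abstract does not specify the snapshot wire format. A minimal sketch of one possible encoding, assuming each player's pose is the 72-dimensional SMPL body-pose vector that VIBE outputs plus a 2D field position (the names `Snapshot`, `PlayerPose`, and the JSON layout are illustrative, not the paper's actual protocol):

```python
import json
from dataclasses import dataclass, asdict, field
from typing import List

@dataclass
class PlayerPose:
    player_id: int                  # identity of the tracked player
    pose: List[float]               # 72 SMPL joint-angle parameters (VIBE output)
    position: List[float]           # hypothetical (x, y) location on the field

@dataclass
class Snapshot:
    timestamp_ms: int               # capture time, used to synchronize clients
    camera_id: int                  # camera judged to have the best view
    players: List[PlayerPose] = field(default_factory=list)

def encode_snapshot(snap: Snapshot) -> bytes:
    """Serialize a snapshot for streaming from server to clients."""
    return json.dumps(asdict(snap)).encode("utf-8")

def decode_snapshot(data: bytes) -> Snapshot:
    """Rebuild the snapshot on the client before applying poses to avatars."""
    d = json.loads(data.decode("utf-8"))
    return Snapshot(
        timestamp_ms=d["timestamp_ms"],
        camera_id=d["camera_id"],
        players=[PlayerPose(**p) for p in d["players"]],
    )

# Round trip: one snapshot at t = 40 ms (one frame at ~25 fps) with one player.
snap = Snapshot(
    timestamp_ms=40,
    camera_id=2,
    players=[PlayerPose(player_id=7, pose=[0.0] * 72, position=[10.5, 22.0])],
)
assert decode_snapshot(encode_snapshot(snap)) == snap
```

Because a snapshot carries only per-player pose parameters rather than pixels, the stream stays compact; the heavy 3D assets (avatars and the field model) are fetched once up front, matching the architecture described above.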