Pedro Alonso

I Let GitHub Copilot Remotely Debug My Live Ubuntu Server

11 min read

We’re all getting used to AI assistants like GitHub Copilot writing boilerplate code, explaining functions, and even drafting unit tests. But can they go further? Can they help with the messy, real-world problems of server administration and DevOps? I decided to find out.

I had a classic DevOps headache: my Node.js application, hosted on a Dokku PaaS, was losing all its logs with every new deployment. This is a non-starter for any serious application where you need to debug past issues.

Instead of diving into Stack Overflow and documentation myself, I opened a chat with GitHub Copilot and posed a challenge. I decided to treat it like a junior DevOps engineer, but with a critical set of ground rules:

  1. AI as the Brain: It would diagnose the problem and propose commands.
  2. Me as the Hands: I would be the secure, human-in-the-loop intermediary. I would execute its commands on my live server and feed back the results.
  3. Safety First: The AI would never get direct SSH credentials or the ability to execute code on its own.

With these rules in place, our session began.

The Initial Prompt: Setting the Scene

I started with a very direct prompt, giving it all the necessary context: my server setup, the app name, the problem, and the initial SSH command to prove it could “connect.”

by running ssh -p 2222 -i ~/.ssh/id_rsa root@YOUR_SERVER_IP "dokku apps:list" you can connect to my server as you can see I use dokku for hosting, also the app I use is ‘my-app’ at the moment the logs are not in a volume and when I re-deploy I lose them, can you help me identify how to persist them in a volume so they’re not deleted only rotated? I use ubuntu, make a plan of action / analysis, don’t do changing commands without my approval

What happened next was fascinating. It didn’t just spit out a generic solution. It created a diagnostic plan.

Phase 1: The AI’s Investigation

Copilot’s first response was to gather information. It wanted to understand the environment completely before recommending any changes. It sent me a list of commands to run, which I executed one by one, pasting the output back into the chat.

Here is the exact transcript of our “diagnostic session”:

AI: I’ll help you analyze the logging situation and create a plan to persist logs using Dokku volumes. Let me first gather some information about your current setup.

Me (running the AI’s commands):

Terminal window
$ ssh root@YOUR_SERVER_IP "dokku apps:list"
=====> My Apps
my-app
$ ssh root@YOUR_SERVER_IP "dokku storage:list my-app"
# (No output, as expected)
$ ssh root@YOUR_SERVER_IP "dokku config:show my-app | grep -i log"
LOG_LEVEL: info
$ ssh root@YOUR_SERVER_IP "dokku ps:report my-app"
=====> my-app ps information
Processes: 2
...
Status web 1: running (CID: d53799169cb)
Status worker 1: running (CID: 18d50a8e1f2)
$ ssh root@YOUR_SERVER_IP "dokku logs:report my-app"
=====> my-app logs information
Logs computed max size: 10m
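
(A quick aside that wasn't part of the chat: that 10m figure is the cap Dokku applies to Docker's json-file log driver for the app's containers. If you want to confirm it yourself, the driver settings are visible per container; the output comment below is illustrative.)

Terminal window
$ ssh root@YOUR_SERVER_IP "docker inspect my-app.web.1 --format '{{json .HostConfig.LogConfig}}'"
# → something like {"Type":"json-file","Config":{"max-size":"10m"}}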

The AI wasn’t done. It then drilled down into the Docker layer to find the exact location of the ephemeral log files.

AI: Perfect! Now let me check where Dokku currently stores logs:

Me (running the AI’s commands):

Terminal window
$ ssh root@YOUR_SERVER_IP "docker inspect my-app.web.1 --format '{{.LogPath}}'"
/var/lib/docker/containers/d53799169cb.../[container-id]-json.log
$ ssh root@YOUR_SERVER_IP "ls -lh /var/lib/docker/containers/d53799169cb.../*.log"
-rw-r----- 1 root root 481 Oct 31 11:41 /var/lib/docker/containers/d53799169cb...-json.log

This diagnostic phase was incredible. The AI knew exactly which Dokku and Docker commands to run to build a complete picture of the problem. It confirmed that no volumes were mounted and located the temporary log file inside the volatile container directory.

Phase 2: The AI’s Analysis and Action Plan

After I provided all the outputs, the AI synthesized the information and presented a clear analysis and a comprehensive, multi-step solution.

📊 Analysis Complete

Current Situation

  • App name: my-app
  • Two processes: web and worker
  • Logging: Uses pino logger writing to stderr
  • Current volumes: None mounted
  • Problem: Logs are stored in /var/lib/docker/containers/[container-id]/, which gets wiped on each deployment.

🎯 Initial Solution Plan

I recommend creating a dedicated log volume, configuring your app to write logs to files in that volume, and setting up logrotate on the host for maintenance.

It then laid out a detailed, four-phase plan complete with the exact commands to run and the code changes to make.
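
For context, the backbone of that approach is Dokku's storage plugin. Here is a minimal sketch of it (my own condensation, not the AI's exact commands; the container path /app/logs is illustrative):

Terminal window
# create a host-side directory under /var/lib/dokku/data/storage/
$ ssh root@YOUR_SERVER_IP "dokku storage:ensure-directory my-app-logs"
# mount it into the app container so the app can write log files there
$ ssh root@YOUR_SERVER_IP "dokku storage:mount my-app /var/lib/dokku/data/storage/my-app-logs:/app/logs"
# rebuild so the mount takes effect
$ ssh root@YOUR_SERVER_IP "dokku ps:rebuild my-app"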

Plot Twist: The AI Learns and Adapts

The AI’s initial plan was solid. It was a classic, by-the-book approach that would have absolutely worked. But it felt a bit… manual. I had a hunch Dokku might have a more elegant, built-in solution.

This was the perfect test. Could the AI move beyond a generic solution if I gave it more specific context? I fed it Dokku’s official documentation on logging and asked it to review its plan.

What happened next was astounding. It didn’t just tweak its answer; it threw the entire plan out and started over.

Wait! After reading the Dokku documentation, I need to revise my recommendation.

🎯 Revised Recommended Approach

Use Dokku’s Vector integration - it’s purpose-built for log shipping and persistence, and it’s already integrated into Dokku. This is much cleaner and more maintainable than custom volume mounting.

Why Vector is Better:

  • ✅ Built into Dokku (no custom hacks needed)
  • ✅ Doesn’t require app code changes
  • ✅ Can ship logs to multiple destinations (file, S3, cloud logging services, etc.)
  • ✅ Automatic log collection from all containers
  • ✅ Won’t be lost on redeploys
  • ✅ Professional-grade log shipping solution

This is a crucial moment. The AI didn’t just stick with its initial solution—it admitted when a better approach existed. This is exactly what you want from a consultant: someone who continues to learn and refine their recommendations based on new information.

The Revised Solution: Using Dokku’s Vector Integration

Here’s the much cleaner, production-ready solution the AI proposed after learning about Dokku’s built-in capabilities.

The Real Implementation: A Live Session

What follows is the actual, unedited conversation where the AI guided me through the implementation. I’ve anonymized the sensitive data, but this is exactly how it happened—commands, outputs, troubleshooting, and all.

AI: Perfect! Let’s implement this step by step. I’ll execute each phase and show you the results.

Step 1: Start the Vector Container

Me:

Terminal window
$ ssh root@YOUR_SERVER_IP "dokku logs:vector-start"
=====> Starting vector container
Ensuring vector configuration exists
Starting vector container
vector Pulling
vector Pulled
Container vector-vector-1 Creating
Container vector-vector-1 Created
Container vector-vector-1 Starting
Container vector-vector-1 Started
Waiting for 10 seconds
Vector container is running

Success! Vector downloaded the Docker image and started running. The AI was tracking progress and immediately knew to move to the next step.

Step 2: Configure Vector Sink (With a Learning Moment)

AI:

Now let’s configure Vector to write logs to a persistent file on the host. Run this command.

I ran the command it suggested, but we immediately hit our first snag.

Terminal window
$ ssh root@YOUR_SERVER_IP "dokku logs:set my-app vector-sink 'file://?path=/var/log/dokku/apps/%Y-%m-%d-{{ app }}.log&encoding[codec]=json'"
! invalid URL escape "%Y-"

The server rejected the command. Without missing a beat, Copilot diagnosed and fixed its own mistake.

AI:

Ah, my apologies. The special characters in the date format need to be URL-encoded. Let me correct that for you. Please run this revised command.

I executed the corrected version:

Terminal window
$ ssh root@YOUR_SERVER_IP "dokku logs:set my-app vector-sink 'file://?path=/var/log/dokku/apps/my-app-%25Y-%25m-%25d.log&encoding[codec]=json'"
=====> Setting vector-sink to file://?path=/var/log/dokku/apps/my-app-%25Y-%25m-%25d.log&encoding[codec]=json
Writing updated vector config to /var/lib/dokku/data/logs/vector.json

Success! This was a perfect example of self-correction. The AI didn’t just provide an answer; it debugged its own process. The %25 is the URL-encoded version of %, which allows the date placeholders to work properly.
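
If you ever need to work out that encoding yourself, any URL-encoding helper will do; for example (my aside, not part of the session):

Terminal window
$ python3 -c "import urllib.parse; print(urllib.parse.quote('%Y-%m-%d'))"
%25Y-%25m-%25d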

AI:

Perfect! Now let’s verify Vector picked up the configuration and is actively watching your containers.

Terminal window
$ ssh root@YOUR_SERVER_IP "dokku logs:vector-logs --num 30"
-----> Vector container logs
! 2025-10-31T12:40:36.048991Z INFO vector::config::watcher: Configuration file changed.
! 2025-10-31T12:40:36.050261Z INFO vector::topology::running: Reloading running topology with new configuration.
! 2025-10-31T12:40:36.050681Z INFO source{component_kind="source" component_id=docker-source:my-app component_type=docker_logs}: vector::sources::docker_logs: Capturing logs from now on.
! 2025-10-31T12:40:36.050729Z INFO source{component_kind="source" component_id=docker-source:my-app component_type=docker_logs}: vector::sources::docker_logs: Listening to docker log events.
! 2025-10-31T12:40:36.056404Z INFO source{component_kind="source" component_id=docker-source:my-app component_type=docker_logs}: vector::internal_events::docker_logs: Started watching for container logs. container_id=4d646498657...
! 2025-10-31T12:40:36.057435Z INFO source{component_kind="source" component_id=docker-source:my-app component_type=docker_logs}: vector::internal_events::docker_logs: Started watching for container logs. container_id=d53799169cb...

Excellent! Vector is now actively watching both containers (web and worker). But are logs actually being written?

Terminal window
$ ssh root@YOUR_SERVER_IP "ls -lh /var/log/dokku/apps/"
total 0

No files yet. The AI didn’t panic—it knew we needed to trigger some activity.

AI:

The directory exists but no logs yet because there hasn’t been any new activity. Let’s restart the app to generate some logs and verify the pipeline is working.

Terminal window
$ ssh root@YOUR_SERVER_IP "dokku ps:restart my-app"
-----> Deploying web (count=1)
Attempting pre-flight checks (web.1)
=====> Start of my-app container output (ae22d9062b4c web.1)
{"level":30,"time":1761914485346,"pid":207,"msg":"Initializing web server..."}
{"level":30,"time":1761914485356,"pid":207,"msg":"Web server listening on port 5000"}
=====> End of my-app container output (ae22d9062b4c web.1)
-----> Deploying worker (count=1)
=====> Start of my-app container output (9883b7e2107b worker.1)
{"level":30,"time":1761914503311,"pid":205,"msg":"Initializing worker..."}
{"level":30,"time":1761914503312,"pid":205,"msg":"Starting worker orchestrator..."}
=====> End of my-app container output (9883b7e2107b worker.1)

The app restarted successfully and generated logs. Now for the moment of truth:

Terminal window
$ ssh root@YOUR_SERVER_IP "ls -lh /var/log/dokku/apps/"
total 44K
-rw-r--r-- 1 root root 44K Oct 31 12:41 my-app-2025-10-31.log

🎉 It’s working! The log file appeared automatically with today’s date in the filename.
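
(A small addition of my own: at this point you can also tail the file to watch new entries stream in as the app runs.)

Terminal window
$ ssh root@YOUR_SERVER_IP "tail -f /var/log/dokku/apps/my-app-2025-10-31.log"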

AI:

Excellent! The logs are being captured. Let’s verify the content is structured properly.

Terminal window
$ ssh root@YOUR_SERVER_IP "tail -5 /var/log/dokku/apps/my-app-2025-10-31.log | jq -r '.message'"
{"level":30,"time":1761914503563,"pid":205,"msg":"Processing new failed jobs",...}
{"level":30,"time":1761914503588,"pid":205,"msg":"Refund processing completed",...}
{"level":30,"time":1761914509446,"pid":204,"msg":"Fetching queued jobs stats",...}

Beautiful! The logs are in JSON format, structured and queryable (see the quick query sketch after this list). What’s brilliant here:

  • Vector captured logs from both the web and worker containers
  • Logs persist on the host filesystem (survive deployments)
  • Date-based filenames for easy organization
  • Zero application code changes needed
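
Since every line is JSON, the history is directly queryable. A rough sketch of one query (the outer object is Vector's envelope with the original log line in its message field, as shown above; pino uses level 50 for errors):

Terminal window
# unwrap Vector's envelope, then keep only error-level pino entries
$ ssh root@YOUR_SERVER_IP "jq -r '.message' /var/log/dokku/apps/my-app-2025-10-31.log | jq 'select(.level >= 50)'"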

Step 3: Set Up Log Rotation

AI:

Now let’s set up log rotation to prevent these log files from filling up your disk. We’ll use logrotate, which is the standard Linux utility for this.

Terminal window
$ ssh root@YOUR_SERVER_IP "cat > /etc/logrotate.d/dokku-apps << 'EOF'
/var/log/dokku/apps/*.log {
daily
rotate 30
compress
delaycompress
notifempty
missingok
copytruncate
maxsize 100M
}
EOF"
$ ssh root@YOUR_SERVER_IP "cat /etc/logrotate.d/dokku-apps"
/var/log/dokku/apps/*.log {
daily
rotate 30
compress
delaycompress
notifempty
missingok
copytruncate
maxsize 100M
}

AI:

Perfect. Now let’s verify the configuration is valid by running logrotate in debug mode.

Terminal window
$ ssh root@YOUR_SERVER_IP "logrotate -d /etc/logrotate.d/dokku-apps"
reading config file /etc/logrotate.d/dokku-apps
Handling 1 logs
rotating pattern: /var/log/dokku/apps/*.log after 1 days (30 rotations)
empty log files are not rotated, log files >= 104857600 are rotated earlier, old logs are removed
considering log /var/log/dokku/apps/my-app-2025-10-31.log
Now: 2025-10-31 12:45
Last rotated at 2025-10-31 12:00
log does not need rotating (log has already been rotated)

Perfect! The configuration is valid. This single file handles rotation for all Dokku apps, not just this one. Note the copytruncate directive: Vector keeps the log file open while writing, and copying then truncating in place means rotation never breaks its file handle.
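
One extra check I would add (not part of the session): force a rotation once so you see the full cycle work, not just the dry run.

Terminal window
# with copytruncate + delaycompress, expect an uncompressed my-app-...log.1 next to the truncated original
$ ssh root@YOUR_SERVER_IP "logrotate -f /etc/logrotate.d/dokku-apps && ls -lh /var/log/dokku/apps/"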

Step 4: Final Verification

AI:

Excellent! Everything is configured. Let’s do a comprehensive check of the entire setup to confirm all components are working together.

Terminal window
$ ssh root@YOUR_SERVER_IP "dokku logs:report my-app"
=====> my-app logs information
Logs vector sink: file://?path=/var/log/dokku/apps/my-app-%25Y-%25m-%25d.log&encoding[codec]=json
Logs vector global image: timberio/vector:0.49.0-debian
$ ssh root@YOUR_SERVER_IP "docker ps --filter name=vector --format 'table {{.Names}}\t{{.Status}}\t{{.Image}}'"
NAMES STATUS IMAGE
vector-vector-1 Up 6 minutes timberio/vector:0.49.0-debian
$ ssh root@YOUR_SERVER_IP "ls -lh /var/log/dokku/apps/ && echo '---' && wc -l /var/log/dokku/apps/*.log"
total 56K
-rw-r--r-- 1 root root 50K Oct 31 12:43 my-app-2025-10-31.log
---
42 /var/log/dokku/apps/my-app-2025-10-31.log

Everything is working! 50KB of logs, 42 lines captured. But here’s the best part—the standard dokku logs command still works:

Terminal window
$ ssh root@YOUR_SERVER_IP "dokku logs my-app --num 5"
2025-10-31T12:41:25.453958495Z app[web.1]: {"level":30,"time":1761914485346,"msg":"Initializing web server..."}
2025-10-31T12:41:25.454128437Z app[web.1]: {"level":30,"time":1761914485356,"msg":"Web server listening on port 5000"}
2025-10-31T12:41:43.589163001Z app[worker.1]: {"level":30,"time":1761914503563,"msg":"Processing new failed jobs",...}
2025-10-31T12:41:43.589217076Z app[worker.1]: {"level":30,"time":1761914503588,"msg":"Refund processing completed",...}

As promised, the normal workflow is untouched. This is the beauty of Vector: it doesn’t interfere with day-to-day operations. You get both:

  • Real-time logs via dokku logs
  • Persistent, searchable logs in files
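
And because the history is plain files on disk, searching it later is ordinary tooling. For example, something like this scans both the current log and its gzipped rotations (the search string is taken from the session output):

Terminal window
# zgrep reads plain and gzip-compressed files transparently
$ ssh root@YOUR_SERVER_IP "zgrep -h 'Refund processing' /var/log/dokku/apps/my-app-*.log*"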

AI:

Perfect! Let me create a final status summary showing everything is operational.

Terminal window
$ ssh root@YOUR_SERVER_IP "echo '=== VECTOR STATUS ===' && docker ps --filter name=vector --format 'Status: {{.Status}}' && echo '' && echo '=== LOG FILES ===' && ls -lh /var/log/dokku/apps/ && echo '' && echo '=== LOGROTATE CONFIG ===' && cat /etc/logrotate.d/dokku-apps"
=== VECTOR STATUS ===
Status: Up 8 minutes
=== LOG FILES ===
total 56K
-rw-r--r-- 1 root root 51K Oct 31 12:46 my-app-2025-10-31.log
=== LOGROTATE CONFIG ===
/var/log/dokku/apps/*.log {
daily
rotate 30
compress
delaycompress
notifempty
missingok
copytruncate
maxsize 100M
}

Implementation complete! Total time: about 10 minutes.

What Just Happened: The Full Picture

Let me break down what we accomplished in this 10-minute session:

Before:

  • ❌ Logs deleted on every deployment
  • ❌ No historical access
  • ❌ Only Docker’s ephemeral 10MB buffer
  • ❌ No way to debug past issues

After:

  • ✅ Logs persist across deployments
  • ✅ 30 days of searchable history
  • ✅ Automatic rotation prevents disk issues
  • ✅ Both real-time (dokku logs) AND historical access
  • ✅ Professional, production-ready solution
  • ✅ Works for all apps on the server

The Magic of This Approach:

  1. No Code Changes: My application code remained completely untouched. Vector handles everything at the infrastructure level.

  2. Self-Correcting: The AI caught the URL encoding error immediately and corrected it without me having to Google anything.

  3. Built-in Solution: We used Dokku’s native capabilities instead of custom workarounds or volume mounting hacks.

  4. Scalable: This same configuration works for all my Dokku apps, not just this one. Future apps automatically get log persistence.

  5. Professional: Vector is an industry-standard log shipping tool (created by Timber.io, now maintained by Datadog) used in production by major companies.

  6. Flexible Future: Later, I could easily ship logs to S3, CloudWatch, Datadog, or any monitoring platform—just by changing the sink configuration. No code changes needed.
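
As a concrete illustration, switching the sink to S3 would look something like this. This is a hypothetical sketch: the bucket and region are made up, the option names follow Vector's aws_s3 sink, and the Vector container would still need AWS credentials available to it.

Terminal window
# illustrative only: bucket and region are placeholders
$ ssh root@YOUR_SERVER_IP "dokku logs:set my-app vector-sink 'aws_s3://?bucket=my-app-logs&region=us-east-1&encoding[codec]=json'"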

The AI as a DevOps Partner

What impressed me most about this session:

Systematic Troubleshooting: When no logs appeared initially, the AI didn’t guess. It logically concluded we needed to trigger activity (restart the app) to generate logs.

Error Recovery: The URL encoding error was caught and fixed immediately. The AI explained why the fix worked (%25 = URL-encoded %).

Validation at Every Step: After each change, the AI requested verification commands to ensure things were working before moving forward.

Production Awareness: The solution included proper log rotation, monitoring commands, and verification that existing workflows (like dokku logs) still functioned normally.

This felt less like following a tutorial and more like pair programming with an experienced DevOps engineer.

My Reflection: More Than an Assistant, a Partner

This experience was more than just solving a technical problem. It was a glimpse into a new paradigm of software development and operations.

What struck me most wasn’t that the AI found a solution, but that it found a better one after I gave it new information. Its first plan (custom volumes and logrotate) was correct but generic. When I showed it the official Dokku documentation, it instantly pivoted to the platform’s native Vector integration—a cleaner, more scalable, and more professional approach.

This is the difference between a script and a consultant. The AI demonstrated:

  • Adaptability: It discarded its own good idea in favor of a great one based on new data.
  • Systematic Process: It followed a rigorous diagnose-propose-verify loop at every step.
  • Self-Correction: It caught and fixed its own errors (like the URL encoding) without my intervention.
  • Production Awareness: It treated the box as a production system, pairing every change with rotation, monitoring, and checks that existing workflows kept working.

The Future of DevOps

The future of DevOps isn’t about AI replacing engineers. It’s about augmenting them. The AI provides the speed, systematic analysis, and breadth of knowledge. The human provides the critical judgment, security oversight, and contextual wisdom.

Together, we solved a production problem in 10 minutes that could have taken me an hour of searching through forums and documentation.

The key insight: AI + Human oversight is more powerful than either alone. I acted as:

  • The secure gateway (executing commands safely)
  • The knowledge curator (providing documentation and context)
  • The decision validator (approving changes before they happened)

This is the new collaboration model, and it’s incredibly powerful. The future is here, and it’s a partnership.

Ready to Build with LLMs?

The concepts in this post are just the start. My free 11-page cheat sheet gives you copy-paste prompts and patterns to get reliable, structured output from any model.