
From Internal Tool to SaaS: Automating Bank Statement Downloads

We built a script to automatically download our Wise bank statements. That is a one-afternoon task. The more interesting question — one that took considerably longer to answer — was whether the same tool could serve other businesses with the same problem, and what it would take to make that transition responsibly. The journey from a CLI script running on a developer laptop to a containerised service with email notifications exposed a set of decisions that are worth documenting, particularly for teams evaluating whether an internal automation is worth turning into a product.

The Original Problem

Wise provides an API for accessing account data, but the standard workflow for most finance teams is to log in to the web interface and manually download a statement CSV at the end of each month. For a single account with a single currency, this takes two minutes. For operations running multiple currencies across multiple Wise accounts — a common pattern for businesses with regional payment flows — it becomes a recurring overhead that someone has to remember, perform consistently, and file correctly.

The Wise API exposes the necessary endpoints. You can retrieve a list of profiles, get balances per currency, and request statement exports in CSV or PDF format for any date range. Authentication is via a personal access token that you generate in the Wise developer dashboard. The token has read-only access to account data when scoped correctly, which means it can be stored and used without the risk profile of a token that could initiate transfers.
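The three calls described above can be sketched with the standard library alone. The base URL, endpoint paths, and query parameters below are assumptions based on the public Wise API documentation, not the tool's actual code:

```python
import urllib.request

API_BASE = "https://api.transferwise.com"  # assumption: public Wise API base URL

def statement_url(profile_id: int, balance_id: int, start: str, end: str,
                  fmt: str = "csv") -> str:
    """Build a balance-statement export URL for a date range.

    The path and parameter names are assumptions from the public API docs;
    fmt may be "csv" or "pdf".
    """
    return (f"{API_BASE}/v1/profiles/{profile_id}/balance-statements/"
            f"{balance_id}/statement.{fmt}"
            f"?intervalStart={start}T00:00:00.000Z"
            f"&intervalEnd={end}T23:59:59.999Z&type=COMPACT")

def api_get(url: str, token: str) -> bytes:
    """Perform an authenticated GET; the personal token goes in a Bearer header."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The same `api_get` helper serves the profile list and per-currency balance endpoints; only the URL changes.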

Our first implementation was about 80 lines of Python. It read credentials from environment variables, iterated over profiles and currency balances, constructed statement requests for the previous month, and wrote the output files to a local directory. It ran manually. That was sufficient for our needs at the time.
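The shape of that first script looks roughly like the skeleton below. The env var name and the three injected callables (`list_profiles`, `list_balances`, `download_statement`) are hypothetical stand-ins for the API calls, not names from the real code:

```python
import os
from datetime import date, timedelta
from pathlib import Path

def previous_month_range(today: date) -> tuple[str, str]:
    """Return (first day, last day) of the previous month as ISO date strings."""
    last_of_prev = today.replace(day=1) - timedelta(days=1)
    return last_of_prev.replace(day=1).isoformat(), last_of_prev.isoformat()

def run(list_profiles, list_balances, download_statement,
        out_dir: Path = Path("statements")) -> list[Path]:
    """Monthly loop: for each profile and currency balance, download last
    month's statement and write it to a local directory."""
    token = os.environ["WISE_API_TOKEN"]  # assumed env var name
    start, end = previous_month_range(date.today())
    out_dir.mkdir(exist_ok=True)
    written = []
    for profile in list_profiles(token):
        for balance in list_balances(token, profile["id"]):
            data = download_statement(token, profile["id"], balance["id"], start, end)
            path = out_dir / f"{profile['id']}-{balance['currency']}-{start}.csv"
            path.write_bytes(data)
            written.append(path)
    return written
```

Keeping the API calls as injected callables is incidental here; the real script called them directly.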

From Script to Scheduled Service

The step from “runs manually” to “runs automatically” introduces a different class of concerns. A script that fails silently when run manually is noticed immediately. A scheduled script that fails silently on the first of every month may go unnoticed for several months, leaving a gap in your statement archive that you discover during an audit.

We moved the script into a Docker container with a simple entrypoint and added a wrapper that sends an email notification on completion — success or failure. The email includes a summary of which accounts and currencies were processed, how many files were downloaded, and the file sizes. On failure it includes the error output. This is a minimal observability investment but it changes the operational contract: the automation is now supervised even though it runs unattended.

Scheduling is handled by GitHub Actions, not by cron inside the container. We use a schedule trigger on a workflow that runs on the first business day of each month. The workflow checks out the repository, builds the Docker image if it has changed, runs the container with secrets injected from GitHub Secrets, and uploads the output files as workflow artifacts. The artifacts are retained for 90 days, providing a retrievable archive independent of any local file storage.
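A workflow of that shape can be sketched as below. Note that GitHub's cron syntax cannot express "first business day" directly, so the weekday check has to live in the job itself; the image name, secret name, and paths here are assumptions:

```yaml
name: monthly-statements
on:
  schedule:
    - cron: "0 6 1 * *"   # 06:00 UTC on the 1st; business-day check done in the job
  workflow_dispatch: {}    # allow manual runs for ad-hoc date ranges
jobs:
  download:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t wise-statements .
      - name: Run downloader
        env:
          WISE_API_TOKEN: ${{ secrets.WISE_API_TOKEN }}
        run: |
          docker run --rm -e WISE_API_TOKEN \
            -v "$PWD/out:/out" wise-statements
      - uses: actions/upload-artifact@v4
        with:
          name: statements
          path: out/
          retention-days: 90
```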

The GitHub Actions approach has a practical advantage over a cron job on a server: it is visible. The workflow run history shows every execution, its duration, its exit code, and the artifacts it produced. A cron job on a server produces log output that is easy to miss and depends on the server being available. For a monthly task, having an auditable execution history in a version-controlled workflow definition is worth the small overhead of the workflow configuration.

What "Multi-Tenant" Actually Requires

After the containerised version had been running reliably for several months, colleagues at other businesses asked whether they could use it. The question sounds simple: can other people run your tool? But in reality, “other people run your tool” covers a wide range of operational models with very different implications.

The simplest model: you share the repository and documentation, others run it themselves in their own GitHub Actions or Docker environment, and you provide no support. This is open-source distribution. It is not a SaaS product. The operational burden is entirely on the user.

The next step: you host the tool and users provide their own Wise API tokens via a configuration interface. Now you hold credentials. This immediately creates security and compliance obligations that are qualitatively different from running a personal automation. You must answer: how are credentials stored, who has access to them, what happens if the storage is compromised, and what is your liability if a user’s Wise account is accessed through your platform?

The step after that involves billing, support, service level commitments, and the operational overhead of running infrastructure for multiple users. None of these are insurmountable, but each requires a decision about how much engineering investment you are prepared to make and what recurring operational cost you are willing to carry.

Evaluating the SaaS Option

We ran the evaluation by listing the requirements for a minimal multi-tenant version and costing each one honestly. Credential management alone — storing Wise API tokens securely, providing a UI to manage them, and implementing the access controls to prevent cross-tenant access — is a week of engineering work before you touch the actual statement download functionality. Add user authentication, billing, an email delivery service with unsubscribe handling, and monitoring for per-tenant job failures, and you have a product that requires sustained investment to maintain safely.

The market for this specific tool is also bounded. Businesses large enough to care about automating statement downloads are often large enough to have accounting software that integrates directly with Wise, or finance teams that have already solved the problem through their own means. The addressable market is the middle — businesses large enough to feel the pain but not yet using integrated accounting systems.

We concluded that the tool was not a good SaaS candidate for our team at the current stage. The revenue potential relative to the engineering and operational investment did not justify it. The open-source distribution model serves the actual need without creating obligations we cannot sustain.

What We Kept and What We Changed

The internal tool is still running. We extended it over time to support PDF output in addition to CSV, to send statement files directly to an email address rather than relying on workflow artifact retrieval, and to support configurable date ranges so it can be triggered manually for ad-hoc statement requests without changing the scheduled monthly run.
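The configurable date range fits naturally into the CLI. A sketch of that interface, with flag names that are illustrative rather than the tool's actual ones, defaulting to the previous calendar month when no range is given:

```python
import argparse
from datetime import date, timedelta

def parse_args(argv: list[str]) -> argparse.Namespace:
    """CLI sketch: default to the previous calendar month so the scheduled
    monthly run needs no arguments, while ad-hoc runs can override the range."""
    last_of_prev = date.today().replace(day=1) - timedelta(days=1)
    parser = argparse.ArgumentParser(description="Download Wise statements")
    parser.add_argument("--start", default=last_of_prev.replace(day=1).isoformat())
    parser.add_argument("--end", default=last_of_prev.isoformat())
    parser.add_argument("--format", choices=("csv", "pdf"), default="csv")
    parser.add_argument("--email-to", default=None,
                        help="optionally send the files to this address")
    return parser.parse_args(argv)
```

With defaults carrying the scheduled case, a manual ad-hoc request is just `--start 2024-01-01 --end 2024-03-31 --format pdf`.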

The GitHub Actions workflow became the canonical deployment mechanism. We document it in the repository README with enough detail that someone who has not touched the tool in six months can understand what it does and how to operate it. This is a discipline worth applying to any internal automation: treat your internal tools as if you will need to hand them over to someone else, because eventually you will.

The most durable lesson from this project is about the gap between “this works for us” and “this works for others.” Internal tools are tuned to your specific constraints: your credential management practices, your file storage conventions, your email infrastructure. Making them generic enough for others to use is not just a matter of adding a configuration file. It requires abstracting away assumptions that you did not know were assumptions until someone else tried to use the tool and hit them.
