---
## Hosting a web site by yourself

Many steps and levels.



The first tiniest challenge
is to serve a web page
AT ALL.
That is, internally
on your own network,
just on your own machine.
 That page will only be
visible and accessible
on that single machine.
 It will be hosted on `localhost`,
the main name for the
IPv4 address 127.0.0.1 (*1) .

**One** of many ways we could do so is with python:

```
python -m http.server
python -m http.server 80
```

The first form serves on the default port 8000; the second binds explicitly to port 80
(which may require elevated privileges).
You can access it at `http://127.0.0.1:8000/` (or `http://127.0.0.1:80/` respectively)
with your web browser or curl.


This would serve all
files in the current folder,
starting with a directory listing.
If the folder contains an `index.html` file,
it serves that instead of the listing.

_(Beware: the above example requires `python` to be installed.)_

A **second** way would
be using javascript and nodeJS
to provide a web server.
A benefit here is that we
will probably end up with
nodeJS anyway, since it holds
so much of modern web javascript tooling.
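
(If Node.js is already installed, a hedged shortcut - rather than writing a server by hand - is the third-party `http-server` npm package, run ad hoc via npx:)

```
npx http-server -p 8080 .
```

It serves the current folder much like the python example above.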

A **third**, ill-advised way
would be with powershell (on windows),
which can expose a crude built-in webserver
(via .NET's `HttpListener`).
Security-wise, probably not a good idea.

A **fourth** way would be
one of the classic dedicated web server engines,
Apache or Nginx.

At this stage it doesn't
matter much which approach we take,
any would do.
Later as we heap on requirements and
restrictions, those will dictate
which will be feasible.

For now, we have reached probably
the simplest possible goal:
 'serving a web page, AT ALL'.


-------------

## Access from **elsewhere**
---

Let us consider which things are lacking
in our current web-serving. Which needs
are most pressing?

  Possibly the worst thing is
if our page is not
**accessible from other computers**.
This strangles the core idea of the web:
access from elsewhere, access from anywhere.

We may not actually be as broken on that front
as we might initially assume. `localhost` - `127.0.0.1` -
is of course only accessible locally, from the machine itself.
  But often, when we assume we are running on `127.0.0.1`,
we are actually bound to `0.0.0.0` instead,
which addresses more than that:
It is a wildcard that binds to every interface it can get its hands on.
In particular, also the NIC's own network address
on the local area network, e.g. `192.168.1.208`.

Thus we __are__ accessible, on e.g. this URL:
`http://192.168.1.208:80`
   (we might not be, though, if the local machine firewall
blocks such access from the outside).
So at least other machines on our LAN may
very well access our site at `http://192.168.1.208:80` .
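
(To make that binding explicit rather than accidental, `python -m http.server` accepts a `--bind` flag; a small sketch:)

```
# loopback only: reachable just from this machine
python -m http.server 8000 --bind 127.0.0.1

# all interfaces: reachable from the LAN too (firewall permitting)
python -m http.server 8000 --bind 0.0.0.0
```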

For now, our site is **not** accessible on the wider internet.
It won't be for some time, as this entails a number of things.
It requires us to deal with public IP addresses, routers
and port forwarding, and yet other things.

But looking at our site address itself, something else is obviously
lacking. It's just random numbers - a far cry from
addresses like `myawesomesite.com`.
Such readable names belong to DNS - the "Domain Name System",
which maps readable names to IP addresses,
in a shared common hierarchy.
  To get one of those **domain names**, we would normally _register_ a public
domain name, for which you pay a recurring fee.
For backwater addresses it won't cost much, but for
prime real estate names like `network.com`, you might pay arbitrarily much.
Apart from cost, the name must also be vacant;
you can't just steal somebody else's name.

However, for demo purposes we can fake a domain name address locally,
just by editing our local /etc/hosts file
(or even by running a local DNS server).
This will just allow your local browser to resolve and display
a friendly DNS name, instead of the raw IP address.
For any purpose where DNS names _actually matter_,
such a local hack will not be valid or work properly.
  If you want to try this out, just locate your local HOSTS file,
and add an entry like

```
127.0.0.1 myawesomesite.com
```

On Windows, your hosts file will be here
(you will need Administrator access to edit it):

`C:\Windows\System32\drivers\etc\hosts`


```
127.0.0.1 awesome
127.0.0.1 awesome.com
127.0.0.1 foo.awesome.com
127.0.0.1 bar.awesome.com

```


--------
## Secure Sites
---
By now, we can - sort of - access our site 'remotely'
(that is, from another *local* computer), and using a symbolic name,
with the DNS hack just described.

But there is yet another thing our site sorely lacks:
Its most basic security.
It is served through the original unencrypted HTTP protocol.
In 2025, that won't do -
straight HTTP is vulnerable to eavesdropping and spoofing in any number of ways.
It is a modern requirement and expectation,
that web sites are served through the **encrypted** HTTPS protocol.

That protocol relies on **server certificates**,
so we will have to handle certificates to do HTTPS.
Such certificates are tied to DNS domain names.
They guarantee

> _'I hold the certificate for the domain `acme.com`,
> and a trusted third party vouches for me'_


That is, an independent trusted third party has verified that `acme.com` has
the right to a valid certificate for that name.
  With this certificate in hand, `acme.com` can sign its communication,
  and other parties can verify it has done so.
  Because of this, `acme.com` must of course closely guard its certificate (and its private key),
  to remain the sole possible valid signer.
  For this reason,
  it is also expected that certificates do not have overly long lifetimes.
  It is safer to trust a certificate that has been issued
  and existed only in recent times.

On our stair climb to achieve a full web site,
we can reach **public named addresses** on an earlier step,
before the step dealing with secure HTTPS/TLS certificates.
Public named addresses only require a 'simple' DNS record
linking IP address and domain name,
whereas HTTPS requires both the logistics of _obtaining_ valid trusted certificates,
and properly integrating their use in the acting web server.

Note that the web server technology itself is also involved in
obtaining and updating the certificate in the first place:
The authority issuing the certificate,
needs proof that you control your domain name.

They do so by asking "you" (your hardware)
to publish a challenge value through your domain on short notice,
and then immediately verifying that they can read back that expected secret value
from your domain. Being satisfied in this,
they will issue your updated server certificate.

As all this is a hassle,
nowadays people often handle it with the integration tool `certbot`,
which obtains and renews certificates automatically and continuously,
against the certificate authority run by the organisation **`Let's Encrypt`**.
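
(As a teaser for later: assuming `certbot` and its nginx plugin are installed, and the domain already points at the machine, the whole dance can be as short as this sketch, with `myawesomesite.com` standing in for a real registered domain:)

```
certbot --nginx -d myawesomesite.com
certbot renew --dry-run    # check that automatic renewal would work
```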

From all this,
we learned that we eventually will be able
to access our numeric-address website through a registered domain name,
and a DNS record entry linking the two.
And that we eventually will be able to offer it all securely,
by provisioning a server certificate for HTTPS/TLS,
with `certbot` and `Let's Encrypt`.

This just about covers all we want
regarding the **front-facing part** of our website,
and how it will reach our users/victims.

-----------------

## Automating Setup, Updates and Config
---
Having dealt with how our website reaches the end user,
it is time to look at how we handle it on the 'inside':

**What** do we put in our website, **where** do we put it,
and how do we **manage** it all?

That topic is so big, those phrases don't even fully cover it.
If we for a moment restrict ourselves to simpler web sites,
we can keep the complexity of the discussion on a reasonable level.
  Thus, we will start by dealing with 'passive' websites,
  so-called **static web sites**.

They are characterized by having no or few moving parts;
they consist of a bunch of static texts and images.
In particular, they do not hold databases of dynamic state
and content that changes, there will (mostly)
be no users interacting dynamically,
and no stock, inventory, shipment or payment happening.



What would the lifecycle of operating such a system be like?

(A) In its most extreme reduction, it might be a single web page,
that never ever changes.

(B) To complicate it, we might imagine that single page is periodically updated.

(C) To complicate it further, there might be multiple pages,
new pages might emerge, and some older pages might be deleted again.

As we consider various stages of such complication,
they might scream for different levels of support.
As an example, consider (C):
Given that older pages might be deleted,
it would be dangerous to do updates with a simple folder copy:
If we copy a new folder on top of the old folder,
that might not get rid of the older pages still there.
What if one of those older pages said
_"Luxury Leather Boots Still Half Price"_ ?



If our situation is (A), it makes little sense to set up
 a complex Rube Goldberg machine to automatically manage the webserver.
If our situation is (C) or worse,
we might want something cleverly automated.

As an example, a scenario (D) might be,
that extra WEB SITES spawn and periodically are removed again.
Of course, scenario (D) could still be handled with a fully manual approach,
 same as we might use for (A).
  Indeed, if you travel back in time to the mid 1990's,
  this would often be exactly what was done.

But as time and laziness advance,
we get hooked on being able to update anything from anywhere.
At the same time, security becomes an ever more pressing concern.
Allowing random login access with full privileges from anywhere,
to the production web server, is a security nightmare.
Any single breach or hole would compromise it all.
  Further, doing administration ad hoc
  is an easy shortcut to random problems
  ("Weird? It didn't do that last time I tried to update it ?").

Thus, we may be interested in mechanical and robust ways to "update just that part, in the proper way".


--------
## Source Code Pipeline
---
In our extreme scenario, we had just one web page file to serve.
Even then, we don't really want the single original source of that file to live directly on the production server.

So we are interested in ways to store-move-transfer the source files,
from elsewhere. Preferably with version control systems like git.
  We might have raw web server source files that can be served directly.
  But we might also instead have intermediate source files,
  which must be built, bundled or compiled
  to produce an appropriate artifact for serving.

Git is not the solution for this latter case of built artifacts.
You might store built artifacts in git, but it would be WRONG.
Some other ways might be to use container/docker technology
to transfer and serve build artifacts.
Or instead to just build directly locally on your server
as part of the update process
(proponents of containers would hate you for doing this :-)

Assuming you "just" use git, there are two ways to do it:
The 'raw' mode, where you serve directly
from the updated git working-directory.
Or the two-stage mode,
where you export the final serving folder from your WD.
IF you use the former,
you should be careful not to expose any git secrets or workings
 through the served content.
 E.g., you would not want to accidentally serve
  the contents of your .git folder.
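
(For the two-stage mode, a hedged sketch of such an export step, with assumed paths:)

```
# copy the git working directory into the serving folder,
# deleting stale files and leaving the .git internals behind
rsync -a --delete --exclude='.git' /home/me/site-checkout/ /var/www/site/
```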


---
## Dealing with Config
---

Until now, we have talked as though the site source files serve themselves.
In practice it is often not so
(though it could be, e.g. with `dotnet run` inside an ASP.NET project).
Instead, the source files are nested inside a **web hosting application**,
like Apache or Nginx.
When doing so, something must glue together the hosting application
and the site source files.

That is, we need some web site CONFIG,
and some way to update the hosting application with this site config.
  For Apache, this could be VirtualHost config files.
Apart from updating those,
we also need some lifecycle integration with the hosting application:
It must be made aware that a given site has been added or updated,
and somehow be made to reload it.
This also involves what happens to
the previously deployed version of the same site.
Is there a seamless transition between the two?
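
(The "make it reload" part is usually a one-liner; a sketch, with the caveat that the exact command names vary by distro and init system:)

```
nginx -t && nginx -s reload                   # validate, then reload nginx
apachectl configtest && apachectl graceful    # the Apache equivalent
```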

The hosting application must also be installed and configured.
For developers choosing the container approach,
this might be solved by building a container image
that contains both the hosting application (Apache) and the site files,
and deploying the result all as one unit.


------------------------

## Acting it out

For now, we should focus on the
external access with IP addresses,
and eventually DNS resolving.

HTTPS/TLS will come later,
and automated deploy pipeline
with source code integration
will also come later.
   The pipeline automation will
hopefully be gradual.
Because I hope to write code and scripts
to deal with the install and setup and config,
and we would of course like those
scripts to be maintained, built and updated
as part of the site source code.
Again, we should probably be careful
not to accidentally publicly serve
our pipeline script code,
which might accidentally reveal
how strangers could take over our automation.

What is our external IP address currently?
When you do a web search for 'what is my ip address',
you reach sites like
`whatismyipaddress.com`
which claim I have this address:
`94.126.2.51` .
It used to be 94.126.6.64

Currently, nothing appears to be responding there:

`curl -v 94.126.2.51`

What is my domain provider?

```
https://selvbetjening.punktum.dk
  /domæne
https://selvbetjening.punktum.dk
  /domæne/xok.dk
https://www.one.com
  /admin/account/products.do
```

.. So.. I found out my provider is `one.com` .
And that I had an outdated public static IP address listed.
I have now updated the IP address registered there.

Anyway, I must have several problems.
Because the domain record was only responsible for
`xok.dk` pointing at the wrong address.
That should not affect attempts to instead directly
contact the raw IP address (which, at the moment of writing, _also_ fails).

So, I guess it is time to look at the router,
and at whether I have any servers powered on.

Let us see.. What is the gateway on the wireless network
I am currently using?

I can see from ipconfig, that my wireless gateway is
`http://192.168.0.1/` .
It is a D-Link router, `Dir-860L` .

.. I have now been around a bit, and done some (physical) cleanup.
I have logged into the main/outer router (the one tied to the static public IP address).
There, I learned it already forwards a lot of ports to my secondary router.
The main router is difficult to log into, but it doesn't matter much,
because across long spans of time, it keeps forwarding the same set of ports
to the inner router.
So, all the day-to-day changes and updates, will concern the inner router instead.

Anyway, I have plugged in the `yellow_bastard` machine again into mains power,
connected its network cable, and turned it on again..
  Also, I have ascertained or noted that its LAN IP address is `192.168.0.99` .

So, let's now see if that machine currently serves anything..
We know that it has not had its site sources updated in ages.
Also, I am a bit uncertain how I even manage google and github on it - whether
I have any X desktop UI configured at all..

I can see that it internally responds well and directly to

`curl http://192.168.0.99:80` .

And, of course, it does nothing on
`curl "https://192.168.0.99:443" `
so far - which matches that I never set that up.

```
http://192.168.0.1/status.php
http://192.168.0.1/st_device.php
http://192.168.0.1/bsc_lan.php
```

```
Host Name 	    IP Address 	      MAC Address 	          Expired Time
Samsung	        192.168.0.200	    bc:7e:8b:b6:6b:60	      6 Days 23 Hours 34 Minutes
T506K	          192.168.0.205	    8e:5f:18:bd:6a:a8	    	6 Days 22 Hours 21 Minutes
spidey	        192.168.0.207	    40:8d:5c:56:26:97	    	5 Days 11 Hours 27 Minutes
whitebeard	    192.168.0.203	    28:b2:bd:b2:af:20	    	6 Days 21 Hours 59 Minutes
DESKTOP-SIGOI5J	192.168.0.206	    24:4b:fe:05:ff:c0	    	6 Days 22 Hours 57 Minutes
silvermouse	    192.168.0.208	    18:5e:0f:8e:12:f0	    	6 Days 23 Hours 55 Minutes
localhost	      192.168.0.210	    9c:6b:00:56:e0:fb	    	6 Days 22 Hours 56 Minutes
```
---

## Learnings Post-Mortem (insecure external HTTP access with DNS activated again)

So, what did I learn today..?

I managed to get - insecure - external access up and running again.
This involved a number of parts:

 - my current public static IP address: Figuring out what that was (spoiler - it had changed).
 - finding my current DNS provider/retailer. Strangely, this was two parties: One who only cared about paid bills,
 but didn't let me update my domain records. And another party, who did not care about bills, but DID allow me to update my DNS records.
 Part of the confusion here is, that in between when I pay my regular bills, those companies
 may get bought and sold multiple times. So you have to figure out "who am I currently paying bills to?"
 - configuring my multiple routers to pass through traffic from outside, onto the right internal entity on my LAN.
 - actually TURNING ON my internal server, which had temporarily been shut down.

So, it was like multiple connected faucets all turned off, and all required to be turned on for it to work again.

Even with all that, two important parts are missing:
 - the activated protocol HTTP is the insecure one, because HTTPS will require many additional steps and configuration.
 - the proper updated site source is not present,
 precisely because a well-oiled site updater is a piece of work in itself, and doesn't happen by accident.
 My issue here was, that I had earlier published through bitbucket,
 but that my current updated site is situated on github.
   So, until I figure out a workable way to refresh
and update from github, the site source files are outdated.

Of the two, I will probably focus on this 'source pipeline' first,
since this is what allows me to iterate in a practical way,
as I work to solve it.

--------

## Thoughts the Day After

As mentioned, I managed to bring _something_
online, on to and accessible from the external network.
A public IP address, a public domain name record,
and a lot of banging on multiple routers, achieved this.

But two glaring things were not solved in all of this:
 - the serving protocol is still the insecure HTTP,
   not the secure HTTPS - which  will require going
   down the managing-server-certificates rabbit hole.
 - getting the proper version of the server files
 transferred and updated was a big and mostly unsolved problem.

 Of these two, getting a reliable, practical, easy, low-friction
 way to continuously update and deploy the web site source
 is my highest priority.
   The main reason being, that this will
enable and make it easy for me to iterate and improve
on my site's setup and behaviour.
That is, it is what will allow me to **build** it and work on it,
in a practical way.

I see about three or four levels of ease for updating
the site sources.

 - 1 manually writing and updating the site files directly on the server.
 - 2 a manual one-time transfer of the site files, with e.g. a USB stick.
 - 3 repeatedly _updating_ those site files with a command or script,
 executed **on the server**.
 - 4 _remotely_ updating the site files with some sort of automation or trigger.

 Right now, I am somewhere between 1 and 2.
 In particular, my "3" updater-script is pulling from the wrong source,
 and cannot easily be changed to do the right thing(?).


---

## Day 3, getting down to it

Time to get started on the very basic.
I wish to do a `git clone` from github,
on a commandline-only system.
This requires picking some sort of auth.
On commandline, I mainly have two options:
Either classic SSH authentication, or HTTPS with a PAT, a Personal Access Token.
  The Powers that Be recommend I pick SSH.

For the SSH road, the advice/steps say:

`ssh-keygen -t ed25519`

which puts a key in `~/.ssh/id_ed25519.pub` - my public identity to tell github about.

Test:

`ssh -T git@github.com`

then
```
git clone git@github.com:user/repo.git
git clone git@github.com:pylgrym/firstapp.git
```

I will need to transfer the ssh key
(though I might hold it externally already?).
I can do so by very quickly downloading it
from my site.

----
## Debriefing & Post-Mortem

So, I got my act together for a bit,
and did some actual work and improvements.

### Onetime/Manual Git Pull

I set up a connection to the correct git repo.
From the mainly two choices for how to do this -
either personal access token (PAT) or ssh,
 I chose SSH.
 This involved using ssh-keygen to make
 me a public-private ssh key pair.
   I could register the public ssh key
with github, which apparently amounts to allowing
'the guy with THAT key' to 'log in and use that github account'
(well, it is auTHENtication, but presumably
also some kind of AUTHORIZATION).
  I have no idea what scopes that ssh auth key entails;
presumably I should look into restricting that somewhat
(I only need 'git pull').
   I could test the access on the CLI with
`ssh -T git@github.com`

Having such auth set up, I could
now finally do my
`git clone github.com/pylgrym/first_app`

and also do  `git pull` .

With this, I could make and update
my user folder (user, not root)  holding my wwwroot / htdocs folder.

My nginx uses a symlink to this.

### REPEATED Updater-Pull-Script

Having succeeded with running this manual 'one-time command',
I could also go to next step, by stuffing said command
into an sh script - site_clone.sh.

This script would combine `git clone` and `git pull` .
In particular, it would test for presence of earlier folder;
if no earlier folder, it would issue a `git clone` .
(which thus relies on my user SSH config in ` ~/.ssh/ `  ).

With this, I have accomplished next step - a
'finished' script, which I can (MANUALLY) re-run at will and at my leisure.
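
(A minimal sketch of what such a script might look like - the repo URL is the one cloned above, the destination path is an assumption:)

```
#!/bin/sh
# site_clone.sh - clone on the first run, pull on later runs
REPO="git@github.com:pylgrym/firstapp.git"
DEST="$HOME/sites/firstapp"

if [ -d "$DEST/.git" ]; then
  git -C "$DEST" pull --ff-only    # existing checkout: just fast-forward it
else
  git clone "$REPO" "$DEST"        # first run: clone (uses the SSH key in ~/.ssh/)
fi
```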


### _Automating_ the Repeated Update (cron), also with last_update marker

So, I could next address the 'manual' part,
and turn it into an automated part.

I did so, by installing a 1-minute frequency CRON job (user, not root),
which would run aforementioned site_clone.sh script OFTEN.
(I should look into running it every 5 minutes only,
or just check for changes before pulling;
maybe git pull almost does this).
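
(A sketch of such a crontab entry - the paths are assumptions, and `crontab -e` edits the per-user table:)

```
# run the updater script every minute, appending its output to a log
* * * * * /home/me/site_clone.sh >> /home/me/site_clone.log 2>&1
```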

I did something else too:
The updater script also updates a small file /last_updated.txt
stamping the time it last executed.
This way, I can see if the update process dies.

I might further make a 'watch' HTML page
which continuously polls last_updated.txt, to _display/monitor_ that
the updater is running.


### Next Steps

Now, I again have choices of what to address next:
I _could_ go further down the path of auto-updating,
and set up some kind of remote trigger.
HOWEVER, with what I currently have, I have already
reached a significant plateau - that the site auto-updates!

Apart from elegance, a trigger update - to some extent -
would mainly 'complicate' (?) things. It would not bring
me something truly 'new', possibly apart from
getting hooked up to possible external monitoring
(ie, that something residing on the 'larger internet'
could now report whether my auto-site-deploy is healthy).
The other tiny thing is that it might bring me
faster, more timely updates, instead of up-to-60-second delays.

But there are other things I could address instead:
Dynamic multi-sites on nginx, which I have earlier
looked into a bit (the config aspect of it).

Also, TLS/HTTPS/server certificates with LetsEncrypt might
be nice to get going.
---------------


## SSH Notes, for git
---
I added an SSH public key to my github account. It worked, and I was able to clone and pull a repo.
However, I was a bit surprised that I was not dragged through some kind of wizard step to pick scopes for that access.
Thus, I am unsure which scopes that auth allows. Is it basically all scopes, or is there some kind of default
configured until I manually set a specific set of scopes, or how does that work?
When I earlier did PAT/PAM, I did have to set scopes?


Aha, there was a good reason behind this:
The reason the PAT has the scope wizards,
is because it is basically BUILT on scopes;
scopes are the 'payload' of the PAT tokens.

  Instead for ssh-KEY, it is viewed as an identity,
and thus the 'guy with that identity' is allowed
to do what he otherwise would be allowed to..
So it is not viewed only as a 'key',
and thus cannot be a 'limited key';
it is specifically an IDENTITY key.
  There are some ways to proceed/solve it.
One would be somehow creating an actual extra github user,
and giving this new restricted user limited access to the repo (e.g. read-only).

Another approach is so-called DEPLOY keys - on repos.
These are "repo-only", and specifically intended
for what I want here.
And, I should probably consider switching to these.
OTOH, on that 'darling server',
I probably actually DO want to be able to interact with github
in general.
(what did I mean by 'darling' ?)
---



## Let's Encrypt Then Shall We (acme)
---
I think next step should be LetsEncrypt.

Hmm, for LetsEncrypt, they say acme.sh is easier/lightweight
than certbot.


Install acme.sh

```
apk add curl socat git
curl https://get.acme.sh | sh
```


You can use multiple modes. Here's the most common:

Option A: Using `webroot` (if you have a web server like Nginx/Apache)

```
acme.sh --issue -d yourdomain.com -w /var/www/html
acme.sh --issue -d xok.dk -w /var/www/html
```

This creates /.well-known/acme-challenge/ in your webroot to prove ownership.

Option B: Standalone mode (no web server running)

```
acme.sh --issue --standalone -d yourdomain.com
```

This temporarily runs a server on port 80, so make sure port 80 is open and no other service is using it.

Option C: DNS challenge (best for wildcard or headless boxes)
If you can't expose port 80 or want *.yourdomain.com, use DNS:

```
acme.sh --issue --dns dns_cf -d yourdomain.com -d '*.yourdomain.com'
```

This requires API keys for your DNS provider (e.g., Cloudflare, Route53).

----
3. Install the Certificate

```
acme.sh --install-cert -d yourdomain.com \
--key-file       /etc/ssl/private/yourdomain.key \
--fullchain-file /etc/ssl/certs/yourdomain.crt \
--reloadcmd     "service nginx reload"   # Optional: reload your service
```

This puts the certs in predictable locations and sets up automatic renewal.


---
4. (Optional) Hook into Services like nginx

Update your nginx config to use:

```
ssl_certificate     /etc/ssl/certs/yourdomain.crt;
ssl_certificate_key /etc/ssl/private/yourdomain.key;
```
----


5. Auto Renewal


acme.sh sets up a cron job automatically. You can verify:
```
crontab -l
```
----



---------

## CertBot alternative:


```
apk add certbot certbot-nginx

certbot certonly --standalone -d yourdomain.com
```

So, (probably?) easier to setup, but heavier?



Aha, LetsEncrypt does not appear to require
me to register anything with them remotely;
or rather, if so, it appears to happen automagically.

I am just prompted for an email address they can contact
in case of problems.

------------------------


## SSL LetsEncrypt Debriefing (now we have HTTPS)
---
So, I actually ended up managing to install
SSL/TLS/HTTPS with LetsEncrypt.
  Here is what was involved.
For starters, I went with ACME.SH,
which presumably is a 'lightweight' low-tech
approach to activating HTTPS/LetsEncrypt.
It is presumably shell-based,
as opposed to CertBot, which is python-based.

The entire thing appears to involve about 4-5 distinct steps.

The (almost) very first thing is to install ACME.SH itself.
It can be done with the dreaded `curl xx | sh`,
which I dared not quite do.
Instead, I curl'ed it into a local .sh file,
which I 'inspected' and did `chmod u+x` on,
to run it (ie, so it could install).

I say 'almost', because there appeared to be a dependency
on `socat`, which I don't know what is, but which also
did not appear to be installed.
So, my step 0 was to `apk add` that socat thing.

All the acme.sh stuff works with chatty diagnostics.
During the install, it claimed my root account had no 'profile' mechanism,
and thus it told me I had to be in the acme.sh install folder
itself, whenever I later wanted to run it.
   I accepted this advice, and thus switched dir to
that path to do the subsequent steps.

The second step - after this installing-business -
is to register yourself. Again, you can ATTEMPT the 'third' step
(certificate ISSUING), but it will abort and instruct you
to run the accounter-register step first.
  Thus, this second account-register step.
And it is quite benign (?), all it asks is an email-address
to report back to, in case of trouble.
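
(The register step itself is a one-liner; a sketch with a placeholder address:)

```
acme.sh --register-account -m me@example.com
```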

Which brings us finally to the third step,
the ISSUING (cert-issuing) step.
(it involves running the acme.sh tool script again,
and this time with the command 'issue').
  The issue step works off two pieces of input:
Your intended DNS domain (that is, the domain you claim to control,
like xok.dk), AND the path to the root of your 'wwwroot' - htdocs - html folder.

The ACTION of acme.sh-ISSUE is to put an unmistakable marker inside
the root of your website files, which it will then try to retrieve
externally on port 80/HTTP against your domain and your web site.

That is, you just claimed that you are the owner of 'your' domain,
and that the (unencrypted..) web site on that domain address
gets its files from the folder you also claimed..
  And acme.sh is VERIFYING your claim by putting a cookie
in the FOLDER you claimed, and then accessing the DOMAIN you also claimed,
and IF it concludes it can drop a secret in the first one
and SEE that secret appear on the second one,
then it is willing to confirm you and issue a CERTIFICATE for you.
Thus, the output of this ISSUE step (when OK)
is a **freshly issued and minted certificate** for you.
   At this point, those files still live in acme.sh's own folder; the later install step copies them to 'expected places' like `/etc/...` .

---
Now with certificate in hand,
we must do something with it..

The fourth step was a cert install:
`acme.sh install-cert`

This presumably places the cert stuff in 'expected places' below e.g. `/etc/` ..

So that is where the stuff is BEFORE that step: still in acme.sh's own folder.

---


One further/fifth thing is to put them into e.g. nginx.
Inside the HTTPS/443 server block in your site .conf files,
you can specify a path to the .cert file,
and a path to the private key file.
The server will need both - the .cert to present to people,
and the private key in order to be able to sign and communicate as itself.

You will have to reload nginx/sites when you change such config.
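
(Putting those pieces together, a hedged sketch of such an HTTPS server block, reusing the cert paths from the install-cert example above; the domain and webroot are assumptions:)

```
server {
    listen 443 ssl;
    server_name xok.dk;
    root /var/www/htdocs;

    ssl_certificate     /etc/ssl/certs/yourdomain.crt;
    ssl_certificate_key /etc/ssl/private/yourdomain.key;
}
```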

As a by-product of the acme.sh install,
you have also got a cron job installed,
which will check for updates/renew certificate.

---


I am unsure about the relation between
install-cert,
and manually stuffing them into nginx.

I guess install-cert places them in relevant global places,
whereas the nginx update instead is an application of them,
where they are pulled from those global expected places.



---


Anyway, this means I now have HTTPS enabled,
and presumably an auto-renewing variant of this.
With a v1/basic vanilla flavor of this.

Possibly, I should then maybe turn off HTTP,
or place HTTP on an unexpected port.


So, now I have both basic https security,
and something that I can auto-update push to.

The main limitations are:
It is a single manual setup, and not very reproducible.

I should probably script and document the config.
If I do, those bits should not go in the public folder.
But that is probably easy to solve,
because the public folder already is a subfolder "htdocs",
so I could make a folder above/near that,
called site-setup.

The next other step,
apart from multiple nginx sub sites,
would be non-static sites,
e.g. a database,
and/or an api against such a database.

---
## FastAPI on YellowDevil - dynamic instead of static sites

So, I was "done" with static sites,
and wanted to have a go at an active api site.
In particular, I wanted to run a python server
with a SQLite database in it.

ASGI.

I had earlier learned, that FastAPI is a popular way to achieve this.
FAPI apparently adheres to ASGI standard/interface, whatever that is.
And I am told, it is popularly hosted inside things called unicorn,
e.g. vunicorn or gunicorn..?

So, I tried to figure out how to do all that, with alpine linux.

The first thing is to get python FastAPI installed.
This would normally entail python pip.
Or even python.

I first learned, that I had to install python itself - so, python3.
That is

`apk add python3` .

But this does not give me pip as such.
I tried to install pip, with e.g. py3-pip.
But got the message that the package did not exist.
The next thing I learned, was that py3-pip is only there,
if I add "community sources" for apk.
  So, I did this - and got my pip command out of that.
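
(A sketch of that detour: enable the community repository by uncommenting its line in `/etc/apk/repositories`, then:)

```
apk update
apk add py3-pip
```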

But when I attempted to USE the pip command, I encountered
**the reason it was not there in the first place:**

When you run it, you are rejected and told not to use it.
Or rather, not to use it to install system-wide stuff:

For system-wide stuff, you should instead continue to use `apk add` installer.

But.. not really. Because all those packages you want,
are not really AVAILABLE as apk packages..

So, you are supposed to do a third thing instead:

You are supposed to use venvs - VIRTUAL ENVIRONMENTS -
which are folder-local, per-project python dependency sets.

So, you get/make a local 'venv' subfolder (using the python `venv` module).
  With this, you get a local "space" to install stuff into.
You must ACTIVATE this venv, by sourcing `yourenv/bin/activate` in your shell
(with `.` or `source`, not by executing it).
This helpfully modifies your displayed prompt to highlight that
you are now in a specific venv.
AND, once inside this venv, you are finally allowed to run PIP..!
Which will then of course install **into your local venv**.
You might naively assume that your local venv now contains
a proper list of dependencies that you can depend on..
But, not so..
  Instead, you actually have to (manually) **manage** this list,
by extracting it into a requirements.txt file
(done with `pip freeze`).
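
(A minimal sketch of the whole venv dance just described, with assumed folder names:)

```
python3 -m venv venv
. venv/bin/activate               # 'source' is spelled '.' in plain sh
pip install fastapi uvicorn
pip freeze > requirements.txt     # capture the dependency list

# later, on a fresh machine or checkout: recreate the venv from the recipe
python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
```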

One reason for this is, that python apparently has several such
mechanisms: A lightweight 'project' variant, and a stronger heavier
variant used for building libraries (e.g. stuff similar to setuptools).

Anyway, it is that requirements.txt recipe which is
the real trick - based on that, you can regenerate your venv.
So, that file is the only thing you need to manage.
Sensibly, by default, it will include versions and names
of transitive dependencies, which is great to reach reproducibility.
But if you are lazy and angry, you
can experiment with just including the root dependencies,
and then assume that those locked versions may be expected to
bring in the same transitive deps at a later time.

But all of this means, that we can't just launch our
python code, we now have to init with venv before we launch
(by calling activate). (And of course before that, INSTALL the venv
from the requirements.txt, if we haven't yet).

Anyway, with the venv and requirements.txt and venv-activate,
we are finally allowed to install and use fastAPI and uvicorn.

Thus, I can do a manual one-off launch of our python api server.
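
(Such a one-off launch might look like this sketch - the module and app names are assumptions:)

```
uvicorn api_webservice:app --host 127.0.0.1 --port 8000
```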

This is only a third of the way.
We need at least two other things:

We need some sort of launcher-service to start and run our fastAPI/uvicorn service
after each boot.
Alpine doesn't do this out of the box,
so we also need to `apk add supervisor` .

And, we need to install a supervisor-service for our fastAPI.
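
(A hedged sketch of what such a supervisor program entry might look like - names, paths, and where exactly the config file goes depend on the setup:)

```
[program:myapi]
command=/home/me/app/run_uvicorn.sh
directory=/home/me/app
user=me
autostart=true
autorestart=false
redirect_stderr=true
stdout_logfile=/var/log/myapi.log
```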

Then, we need a third thing:
We must set up an nginx reverse proxy,
to map this local-only service out to the outside world.

This is probably not strictly needed.
It is mainly a tool to 'adjust and direct' the api service in a uniform way,
e.g. to wrap HTTPS on top of it.
  We need to figure out whether we want to use CORS
to mix a static site with calls to the python service on other ports.

Maybe we can use nginx to multiplex the two of them
into a single port, for the outside?
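
(A sketch of that multiplexing, assuming uvicorn listens on localhost:8000 and we mount it under /api/ inside the existing site's server block:)

```
location /api/ {
    proxy_pass http://127.0.0.1:8000/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```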

Anyway, all this crap is why docker is starting to look relevant;
to force all these ad-hoc settings into a common uniform framework.

----

sh, not bash, on alpine.

faster test for changes:

`git status -uno --porcelain=v2 -b`
`git status -uno --porcelain=v2`

----

-------------

Hmm, I need a symlink,
from where my nginx expects the wwwroot to be,
to where it actually is.
So, a link from folder A to folder B.

----
---
## Back to Hosting Options Again
---

what are my options for running or hosting or deploying my python service
on my linux box?
In order of increasing complexity, please describe my options.
The code for the python service is maintained in a git repo.
---

GPT's first/lowest suggestion is to MANUALLY
check it out, and MANUALLY launch it.
  That matches my own thoughts and expectation.

Its second suggestion, I was not fully aware of.
It mentions NOHUP, with honorable mentions of TMUX and SCREEN.
  And, I agree with GPT,
that the first main drawback of these approaches,
is that they don't restart up, if machine reboots.

Its 3rd suggestion, of course, is systemd
(or as you might have, some RC service instead).
This takes care of the 'restart' thing,
and also helps with stuff like logging/monitoring.

GPT claims, the main drawback of systemd is that it requires SUDO and 'a bit of setup'.

Next he recommends supervisor, which I had thought of
as a lightweight systemd alternative.
  Interestingly, supervisor is where I am at the moment.

It is not obvious to me how supervisor is worse/better than systemd.

Next thing, I am being recommended to use so-called virtualenv,
to isolate/insulate against dependencies.
I can see they are specifically talking about python virtualenv.

Next up, they recommend gunicorn or uvicorn.




| Method                 | Restart on crash | Start on boot | Isolation | Logs     | Complexity |
| ---------------------- | ---------------- | ------------- | --------- | -------- | ---------- |
| Manual                 | ❌               | ❌            | ❌        | ❌       | 🟢 Low     |
| nohup / screen         | ❌               | ❌            | ❌        | ⚠️ Basic | 🟢 Low     |
| systemd                | ✅               | ✅            | ❌        | ✅       | 🟡 Med     |
| supervisor             | ✅               | ✅            | ❌        | ✅       | 🟡 Med     |
| Virtualenv             | N/A              | N/A           | ✅        | N/A      | 🟡 Med     |
| gunicorn / uvicorn     | ✅               | ✅            | ✅        | ✅       | 🟡 Med     |
| Docker                 | ✅               | ✅            | ✅✅      | ✅       | 🔴 High    |
| docker-compose         | ✅               | ✅            | ✅✅      | ✅       | 🔴 High    |
| Git-based CI/CD        | ✅               | ✅            | ✅✅      | ✅       | 🔴 High    |

NB: The table lacks automated pull/deploy/CI-CD column.
This means the auto CI/CD benefit is not highlighted.

What is `git-auto-deploy` for python?

--
supervisorctl restart myservice

| Command                         | Description                                     |
| ------------------------------- | ----------------------------------------------- |
| `supervisorctl status`          | List all managed services                       |
| `supervisorctl start myservice` | Start the service                               |
| `supervisorctl stop myservice`  | Stop the service                                |
| `supervisorctl reread`          | Reload configs (does **not** apply changes yet) |
| `supervisorctl update`          | Apply changed configs (after `reread`)          |
| `supervisorctl reload`          | Full restart of Supervisor (and all programs)   |

So I probably need reread and update.
But.. would that reload source code?
maybe it's still restart I need.
Yep, it's the service-restart stuff we need.




Todo: I must remember to set non-root users for my docker stuff.




## Footnotes
---
 (*1) (which happens to be the first valid address inside the loopback network).


---
## A New Plan - SQLite, web api, fastApi, uvicorn, supervisor
---

I think I finally chose and found a plan for how to proceed, for now.
I had already - for what little it is worth - managed
to install a `supervisor` service with fastapi and python.

So, I have today built the following (or, at least, written the code for it).

A python web api service, which includes an SQLite db layer
built on SQLAlchemy, and a simple web api built on fastApi.
Further, the web api ALSO hosts the world's tiniest front-end -
a single static html file with a bit of ajax javascript,
to speak with said api service.
FastAPI, being an ASGI system, can be hosted with one of the -corns,
e.g. uvicorn.
The reason we use uvicorn and fastAPI is to get a robust web server
to host the python.
If we instead used one of the 'tiny' http servers for python,
they are quite brittle and poorly performing. They will
strain when receiving multiple requests, and may crash in weird ways.

Thus, we have this small joined-paired front+backend in python,
and uvicorn to host it robustly.
  We then have supervisor as a lightweight service manager,
which will launch our fullstack thing on boot,
and possibly also help keep it running (systemd would?,
I am not sure supervisor really will?)

Also, the source code for all this will be continually pulled
on our little server.
  The next step would be a command or script
to do the 'supervisorctl restart' command - to trigger _manually_ at first.
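
(That manual step could be as small as this sketch - the repo path and program name are assumptions:)

```
git -C /home/me/sites/myapi pull --ff-only && supervisorctl restart myapi
```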

Our next trick would then be fancy mechanisms to remotely-trigger this.

A suggested way to hook it up to github-actions is:

`git-auto-deploy`

That is a task in itself, it sounds convoluted
to get installed and set up on alpine? We will see..

What is `git-auto-deploy` for python?


Eventually, maybe the better thing to do
would be simply SSH'ing to the server. But that would presumably be a larger attack surface.


---
## Execution - venv on windows woes
----

I ran into problems at the very start -
python virtualenvs would not work for me on windows.
The problem is that .py file extension is registered to
an absolute path to python.exe in the windows registry,
so the path shenanigans that virtualenv uses,
have no effect there.
  I am now for the 117th time trying to figure out
how windows currently has fucked up file extension applications.
I should probably check from the command line.

THIS was the place:
```
Computer\HKEY_CLASSES_ROOT\Python.File\Shell\open\command
```
and I am now trying this:

```
"C:\python313\pythonn.bat" "%1" %*
"C:\bin\pythonn.bat" "%1" %*
```
And pythonn.bat looks like
```
python.exe %*
```

`pip install uvicorn`
`pip freeze > reqs3.txt`

http://127.0.0.1:8000/static/index0.html

https://github.com/pylgrym?tab=repositories


---

## angelapp Angular
I finally found it.
It was angelapp I was missing, and it is of course an angular app.

So now we run into the issue
of how to build-deploy an ANGULAR app.

Of course, without docker et al,
we do not want to build the _bundle_ on the dev client.
  But: When we don't,
we instead would require that dev tooling
on the SERVER, which introduces two problems:
 - now the production server would
 be involved in a second task _unrelated_
to its actual responsibility and purpose,
which was SERVING PAGES/CONTENT.
 - this (non-trivial) build step we
would now be running on the prod-server,
may FAIL, and will need DEPENDENCIES (which can be wrong, missing or break).

The crux of it is - if it is a build step
and 'non-final', it should probably not happen
on that server..
  But, we also don't want to pollute git with
(big/binary) build artifacts.

So, it boils down to, that we don't have a place to do, and to store, and to transfer, BUILDS.

Riding around in a mix of html, javascript and python
has enabled us to run directly from interpreted sources.
The illusion is already shattering a bit,
because python and its libraries require installs,
and dependencies with correct versions.
   DotNet/C# allows us a similar illusion,
but actually relies on the hosting server being
able to dynamically retrieve nuget packages.

We might do with less, if we were able to 'publish'
e.g. zip file archives to the hosting server.
  This could happen directly between
our dev-laptop and the hosting-server (think webdeploy).

So, by publishing from the dev-server,
instead of using git, we might run with 'less than docker'.

Remember that, I think, LÖVE (the Lua game framework) used concatenation
of an .exe file and a .zip file, to have the program at the front,
and a zip of resources at the back.

scp or rsync can bring my .zip site to the hosting server.

```
#!/bin/bash
# bundle the app, copy it to the server, swap it into place, restart the service
zip -r myapp.zip ./myapp
scp myapp.zip user@yourserver.com:/home/user/deployments/
ssh user@yourserver.com << 'EOF'
  unzip -o /home/user/deployments/myapp.zip -d /var/www/myapp-temp
  mv /var/www/myapp /var/www/myapp-backup
  mv /var/www/myapp-temp /var/www/myapp
  sudo systemctl restart myapp.service
EOF
```

Maybe I should try to build angular anyway,
on server.

And, anyway, at least now I have a 'simple' python fastapi with raw html/js,
to start out with..




---
1. Install Docker

Alpine uses apk, so you can install Docker like this:

apk update
apk add docker
rc-update add docker boot
service docker start
----

Or maybe try  PODMAN instead?

Docker SAVE lets you build a .tar file with a docker image.
And Docker LOAD can pull in that .tar image file.

Alternatively, run a docker registry on the server.

You can also use docker CONTEXT to connect your laptop and the server directly.
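
(A sketch of the save/load route, with assumed image and host names:)

```
docker save -o myapp.tar myapp:latest
scp myapp.tar user@yourserver.com:/tmp/
ssh user@yourserver.com "docker load -i /tmp/myapp.tar && docker run -d -p 8081:80 myapp:latest"
```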

Thought: My lack of familiarity with ssh
makes me avoid workflows involving it.
On the other hand, this makes me investigate a lot of possibilities,
since I can't just 'manually brute-force it' through ssh.

But again, SSH would make it easier for me to try out things
gradually, and allow me to incrementally iterate and improve solutions step-by-step
('can I automate this manual step a bit?').

But it is clear that
docker would very much allow me
to wrangle arbitrary silly modern front-end Rube Goldbergs.

---
## what even is [git-auto-deploy], a Mirage..
---

Let's try to figure out, what this `git-auto-deploy` is..

This is a nicely written article
about YET ANOTHER TOTALLY DIFFERENT WAY OF DOING IT,
introducing 117 tools.
It might make sense to read, to get even more familiar
with this obnoxious territory:

https://medium.com/zerosum-dot-org/a-pure-git-deploy-workflow-with-jekyll-and-gitolite-b3a48f2ce06f

One of the first things I learn, is that MANY THINGS
squat on the same name:
This one here, is by Oliver Poignant (olipo186):
https://pypi.org/project/git-auto-deploy/
Apparently he had a lot of fun with it, from 2016-2017..
  It is a python/pip thing.

///

`https://github.com/pylgrym/first_app/settings/hooks`

So, I could definitely - and extremely easily (1 step) -
set up a **web hook** on github,
which will do a POST with json to
any URL I specified, whenever a git push event happens.
  Presumably, I will figure out
how to filter it for correct branch;
I don't want it to act on any branch.


https://pypi.org/project/git-auto-deploy/

?
Select the Post-Receive URL service hook
?

This one appears to be PHP based:

https://gist.github.com/nichtich/5290675

This one is scriptburn:

https://github.com/scriptburn/git-auto-deploy
Also PHP-based.


Thought - many people use PHP like I would use python.


https://medium.com/hookdoo/automatic-deployment-on-push-to-github-repository-74190c87eee4

Hookdoo - this appears to be a "huge" solution;
where above all I worry about security - is he harvesting my SSH keys..


This is again a PHP plus webhook:

https://portent.com/blog/design-dev/github-auto-deploy-setup-guide.htm



For now, I simply added a /webhook1 POST entry to my `fasteddy` python ASGI service.
But I will probably need to modify nginx too,
so that the python service becomes available on some domain name.
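
(For reference, a minimal sketch of what such a handler might look like in FastAPI - this is not the actual fasteddy code; the branch name and the idea of shelling out to a script are assumptions:)

```
# GitHub push payloads carry the branch in the "ref" field, e.g. "refs/heads/deploy".
# A production version should also verify GitHub's X-Hub-Signature-256 header.
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/webhook1")
async def webhook1(request: Request):
    payload = await request.json()
    if payload.get("ref") == "refs/heads/deploy":
        # here one would kick off the pull/restart script (e.g. via subprocess)
        return {"status": "deploy triggered"}
    return {"status": "ignored"}
```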

---
## Recap again, Supervisor, webhook
--

Friday, I finally got a lot of pieces
fitted together, working, and used in combination.

 - `supervisor` service manager: I finally
 appeared to get it working correctly,
 including a re-install of it.
 Even though I earlier got weird messages from it
 and lobotomized some of its settings,
 I ended up resetting its configuration file,
 because otherwise it couldn't even correctly
 communicate with itself (ie, supervisorctl
 could not function to e.g. do its reload).
   For now, I disabled its 'autorestart' feature,
because I fear it might cause a zombie effect;
ie every time I tried to kill or stop a broken
uvicorn service, it would reappear (and still be broken).

One of the things I changed in this setup
was to change the uvicorn launch to be a full shell script.
This lets me do things like
logging the time, properly sourcing the venv,
and finally launching uvicorn.
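
(A sketch of such a wrapper - paths, log file and module name are assumptions; the `exec` makes supervisor track the uvicorn process itself:)

```
#!/bin/sh
# run_uvicorn.sh - log the launch time, enter the venv, hand over to uvicorn
echo "launched at $(date)" >> /var/log/myapi_launch.log
cd /home/me/app || exit 1
. venv/bin/activate
exec uvicorn api_webservice:app --host 127.0.0.1 --port 8000
```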

One thing to watch out for there, is whether
the previous uvicorn process is properly killed.
If it is still running or hogging the IP port,
we cannot relaunch a new uvicorn process.

Another thing to note with supervisor,
is that I am not sure I have followed
proper practices for using a non-root user,
and thus right now I am ill-advisedly running as root, I think.

So, to recap what I have combined:

 - supervisor as service manager for the uvicorn-python service
 - nginx to reverse-proxy the localhost-uvicorn service
 onto a separate domain hostname (deploy.xok.dk).
 (maybe I should use a more obscure hostname?)
 - making a separate TLS certificate for deploy.xok.dk, with acme.sh
 (issue, install-cert, manual insert)
 - github webhook for branch push







I still lack one final piece for this:
The actual 'restart script'.
I have a handler in my uvicorn service,
which correctly recognizes when my 'deploy' branch is pushed.
But currently, no action-script is connected to it.

This is another fixme: the branch is hardcoded to 'deploy'.

Anyway, ironically right now, more-or-less,
it is **the service itself that I want to do auto-update for,**
that now can 'autoupdate itself'.

This is not fully true. In particular,
I will make a separate service
for the python-db-angular kit.
In particular, I will set up separate domain and TLS cert for them.
I am unsure whether I need to add separate DNS records for
other subdomains, or if * can handle it.
I suspect it can't, from what I have seen, but I can at least try it out..

Note that I could very well remap paths inside a single domain name.
But for now, I would like to separate them for experiments.
Also, I like that I can do separate files in the nginx folder.
I should probably version those nginx rules.

Next thing to do,
would be to set up the nginx / angular / python thing.

As for docker, I might try to build those directly on server!
But, if they fail, I only have the imperfect replication
on my own machines. But if docker delivers, that should be symmetric
and not a problem!


Anyways, for all of those git-auto-deploy "solutions",
I found out they all reduced to what I am doing myself anyway,
often with php. However, I should still look into what olipo186 did
more precisely?
 I learned I have to manually determine which branch was hit;
the github webhook just dumps a truckload on your head to sort out yourself.

https://github.com/olipo186/Git-Auto-Deploy

https://github.com/olipo186/Git-Auto-Deploy/blob/master/docs/Configuration.md


--

matching uniform names.
icons.
classic.

https://deployer.xok.dk/
https://batadase.xok.dk/static/index0.html


`Define SRVROOT "C:/Users/jakga/AppData/Roaming/Apache24"`

So, bootstrap..
Chocolatey is the first thing we need,
installed now.
Then we can do
`choco install sublimetext3`
`choco install greenshot`
which allows us to take notes,
as we work.

Now, git and apache.
I cheated with GPT, to learn the command is
`choco install apache-httpd -y` .
apache-httpd it is..
But I won't really like where it puts apache.

httpd.exe -k install
httpd.exe -k install -n "Apache"
net start Apache2.4
httpd.exe -k uninstall
httpd.exe -k uninstall -n "MyApacheService"
sc delete "YourServiceName"
sc query Apache2.4
sc query Apache

Sadly, sc query only finds a direct match :-/.
And it displays very little.


services.msc

Get-Service | Where-Object { $_.Name -like "*apache*" -or $_.DisplayName -like "*apache*" }
sc query type= service | findstr /I "apache"



Get-WmiObject -Class Win32_Service | Where-Object {
    $_.Name -like "*apache*" -or $_.DisplayName -like "*apache*"
} | Select-Object Name, DisplayName, State, StartMode, PathName, Description | Format-List



Name        : Apache
DisplayName : Apache
State       : Running
StartMode   : Auto
PathName    : "C:\Users\jakga\AppData\Roaming\Apache24\bin\httpd.exe" -k runservice
Description : Apache/2.4.55 (Win64) OpenSSL/1.1.1s




----

Oh, I also need
`choco install git`
to clone anything..




---
I looked closer at the scopes git-cred-manager suggests - these 3:
 - create gists
 - full control of private repos
 - update github action workflows



Better favicon, or those I have.

windows should not win over greenshot.

https://www.favicon.cc/?action=icon&file_id=659959

https://evilmartians.com/chronicles/how-to-favicon-in-2021-six-files-that-fit-most-needs

https://favicon.io/favicon-generator/




So, what do we have, so far.
We have the basic *.xok.dk https site up (*)
We have the 'deployer' site, with certificate.
we have the 'batadase' site, also with certificate.
```
https://xok.dk/
https://deployer.xok.dk/
https://batadase.xok.dk/static/index0.html
```
(*)
There is a small gotcha with the xok.dk / *.xok.dk site:
The certificate is NOT a wildcard certificate,
even though we redirect *.xok.dk to it.
Because of this, random-name accesses (random.xok.dk)
will trigger a certificate error/warning: that the certificate
is only valid for xok.dk itself, not for those sub-domains.
I do not know the full details of what is possible here
(a true wildcard certificate would need the DNS-challenge route mentioned earlier).
It is unsolved, and possibly I should instead remove the wildcard redirect.

But the two other sites appear to work.
They both use python and WSGI.
I do not yet use angular or build angular.

Possibly, I should also look into building angular with docker,
to learn more about the precise process (it will help me
build stuff that is NOT docker, too).

I ran into another issue while I was setting up:
Supervisor and fastapi/uvicorn did not play along nicely;
they did not agree on how to restart, even if I specified reload.

Hmm, I can see in fullstuck,
that I am missing checkin/commit of the runner.
I am still contemplating and considering to harmonize and uniform-ize
the names of the code, modules, projects and sub-sites,
so I don't use multiple names for the same thing.
  There is an eternal conflict here, on mis-naming:
You can both mis-name things by christening them with a "bigger" name
that refers to a "too-large" thing that they are part of
(e.g., "naming the guy with his city's name").
Or, you can name them for one of their sub-components, which
fails to declare the totality of them (E.g., "Johnny One-Arm").

An example:
The "deployer" is named for fastapi, which is just an implementation detail
of how it was built, which says nothing about what it **does**.
  'Deployer', though, is not really precise either; it is not
exactly a deploy-tool, it is more like 'a thing we do, that is somewhat related to deploy'.
  Internally, one of our services has the part named `api_webservice.py`,
which again is not fitting on another level.
It is the role it has _internally_ ('this is the part that handles the api webservice aspect'),
but in the grander scheme of things, that is just as unspecific as "service", "config", "role", "profile".

---
## Docky
---
Anyway, let's install and play some docker..
`choco install docker`. That brings us a docker CLIENT,
and the error

``` docker: error during connect: this error may indicate that the docker daemon is not running .. ```

Let's install a docker ENGINE too;
for playing around, that one is called 'docker desktop' (550M, at least).
`choco install docker-desktop` .

So, I launched docker-desktop, and I logged in (the 'sma..' account, not the 'jg' one).
HOWEVER, it still/also requires, that I ALSO install
wsl --update .
This is again a partial failure on microsoft's part:
They include a broken stub fake 'wsl', which instead just prints 'yeah, you should totally install wsl!'.
This causes DD to mistakenly conclude that 'wsl is already installed',
and therefore to suggest I 'update' it.
So, we should instead do this: `wsl --install` .

wsl required a reboot, but is now up to date.
Now, we can launch Docker Desktop once more.

deploying WSL2 distributions
ensuring main distro is deployed: deploying "docker-desktop": importing WSL distro "The operation could not be started because a required feature is not installed. \r\nError code: Wsl/Service/RegisterDistro/CreateVm/HCS/HCS_E_SERVICE_NOT_AVAILABLE\r\n" output="docker-desktop": importing distro: running WSL command wsl.exe C:\Windows\System32\wsl.exe --import docker-desktop \
--




`Host Compute Service`
was missing.

Based on random advice, I installed this,

```
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All -All
```

which triggered a mass of reboots and windows-update like
experiences.
I never got to the second part,
but now docker-desktop appears to start correctly.

So, I resume my command
`docker run i1` .

And now I get this better error:

```
> docker run i1
Unable to find image 'i1:latest' locally
docker: Error response from daemon: authentication required - email must be verified before using account

Run 'docker run --help' for more information
```

I went through a
'verify email address' flow with their web site,
which makes me wonder why it wasn't already done.

Now I get this instead:

C:\Apache24\htdocs\other\scratch>docker run i1
Unable to find image 'i1:latest' locally
docker: Error response from daemon: pull access denied for i1,
repository does not exist or may require 'docker login'

So let's try docker login.

```
> docker login
Authenticating with existing credentials... [Username: (myUserName) ]
i Info → To login with a different account, run 'docker logout' followed by 'docker login'
Login Succeeded
```

I still get the same error,
but at least, now I am logged in ...

```
npm install -g @angular/cli

http://localhost:4200/

ng build

docker run --rm -it my-app sh
ls -al /usr/share/nginx/html/
docker run --rm -it mytag sh

docker run -p 8081:80 --rm mytag

```



---


So, now I have a dualboot install.
I have windows 10 installed,
and ubuntu 24, but no wifi so far??
  I could put a dongle in it.
I could retrieve a dongle from tj-data.




-----------------------------------------------
Recap:
1, to serve a web page AT ALL (same computer)
2, served page accessible from SOME other computer (with IP address).
3, faking a DNS address with the local /etc/hosts file.




# END OF STORY.