p-fast trie, but smaller

2025-08-06 06:21 pm
[personal profile] fanf

https://dotat.at/@/2025-08-06-p-fast-trie.html

Previously, I wrote some sketchy ideas for what I call a p-fast trie, which is basically a wide fan-out variant of an x-fast trie. It allows you to find the longest matching prefix or nearest predecessor or successor of a query string in a set of names in O(log k) time, where k is the key length.

My initial sketch was more complicated and greedy for space than necessary, so here's a simplified revision.


Shark Off Of Halifax

2025-08-06 09:39 am
[personal profile] dewline posting in [community profile] common_nature
I don't live in Nova Scotia. The nearest big bodies of water to me are rivers, not oceans.

Still feeling awestruck at the sight of this. Apparently, some sharks do curiosity.



https://www.cbc.ca/news/canada/nova-scotia/close-encounter-with-great-white-shark-near-halifax-sparks-awe-disbelief-1.7600371
[personal profile] mjg59
There's a lovely device called a pistorm, an adapter board that glues a Raspberry Pi GPIO bus to a Motorola 68000 bus. The intended use case is that you plug it into a 68000 device and then run an emulator that reads instructions from hardware (ROM or RAM) and emulates them. You're still limited by the ~7MHz bus that the hardware is running at, but you can run the instructions as fast as you want.

These days you're supposed to run a custom built OS on the Pi that just does 68000 emulation, but initially it ran Linux on the Pi and a userland 68000 emulator process. And, well, that got me thinking. The emulator takes 68000 instructions, emulates them, and then talks to the hardware to implement the effects of those instructions. What if we, well, just don't? What if we just run all of our code in Linux on an ARM core and then talk to the Amiga hardware?

We're going to ignore x86 here, because it's weird - but most hardware that wants software to be able to communicate with it maps itself into the same address space that RAM is in. You can write to a byte of RAM, or you can write to a piece of hardware that's effectively pretending to be RAM[1]. The Amiga wasn't unusual in this respect in the 80s, and to talk to the graphics hardware you speak to a special address range that gets sent to that hardware instead of to RAM. The CPU knows nothing about this. It just indicates it wants to write to an address, and then sends the data.

So, if we are the CPU, we can just indicate that we want to write to an address, and provide the data. And those addresses can correspond to the hardware. So, we can write to the RAM that belongs to the Amiga, and we can write to the hardware that isn't RAM but pretends to be. And that means we can run whatever we want on the Pi and then access Amiga hardware.
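To make the memory-mapped I/O idea concrete, here's a toy model in Python (my own sketch, not pistorm code): a single address space where writes to the "hardware" range land in register state instead of RAM. The 0xDFF000 base is the real Amiga custom-chip register block, but everything else here is simulation.

```python
# Toy model of memory-mapped I/O: one address space, where writes to a
# "hardware" range take effect as register updates instead of RAM stores.
# 0xDFF000 is the real base of the Amiga custom chip register block;
# the rest of this is purely illustrative.

CHIP_RAM_SIZE = 512 * 1024
CUSTOM_BASE = 0xDFF000           # Amiga custom chip registers live here
CUSTOM_SIZE = 0x200

ram = bytearray(CHIP_RAM_SIZE)
registers = {}                   # register offset -> last value written

def bus_write16(addr, value):
    """Model a 16-bit CPU write: the bus routes it to RAM or to a chip
    register purely based on the address - the CPU neither knows nor cares."""
    if CUSTOM_BASE <= addr < CUSTOM_BASE + CUSTOM_SIZE:
        registers[addr - CUSTOM_BASE] = value & 0xFFFF
    elif addr + 1 < CHIP_RAM_SIZE:
        ram[addr] = (value >> 8) & 0xFF
        ram[addr + 1] = value & 0xFF
    else:
        raise ValueError(f"unmapped address {addr:#x}")

# A RAM store and a register write look identical from the CPU's point
# of view: same operation, different address. COLOR00 is at offset 0x180.
bus_write16(0x1000, 0xABCD)      # plain RAM store
bus_write16(0xDFF180, 0x0FFF)    # COLOR00 register: white
```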

And, obviously, the thing we want to run is Doom, because that's what everyone runs in fucked up hardware situations.

Doom was Amiga kryptonite. Its entire graphical model was based on memory directly representing the contents of your display, and being able to modify that by just moving pixels around. This worked because at the time VGA displays supported having a memory layout where each pixel on your screen was represented by a byte in memory containing an 8 bit value that corresponded to a lookup table containing the RGB value for that pixel.

The Amiga was, well, not good at this. Back in the 80s, when the Amiga hardware was developed, memory was expensive. Dedicating that much RAM to the video hardware was unthinkable - the Amiga 1000 initially shipped with only 256K of RAM, and you could fill all of that with a sufficiently colourful picture. So instead of having the idea of each pixel being associated with a specific area of memory, the Amiga used bitplanes. A bitplane is an area of memory that represents the screen, but only represents one bit of the colour depth. If you have a black and white display, you only need one bitplane. If you want to display four colours, you need two. More colours, more bitplanes. And each bitplane is stored in an independent area of RAM. You never use more memory than you need to display the number of colours you want.

But that means that each bitplane contains packed information - every byte of data in a bitplane contains the bit value for 8 different pixels, because each bitplane contains one bit of information per pixel. To update one pixel on screen, you need to read from every bitplane, update one bit, and write it back, and that's a lot of additional memory accesses. Doom on the Amiga was slow not just because the CPU was slow, but because there was a lot of manipulation of data to turn it into the format the Amiga wanted, and then all of that had to be pushed over a fairly slow memory bus to be displayed.
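As a sketch of what "update one pixel" costs with planar graphics - one read-modify-write per bitplane - here's a toy version in Python (plane layout and helper names are my own, purely illustrative):

```python
# Planar pixel write: every plane gets touched for a single pixel.

WIDTH, HEIGHT, DEPTH = 320, 200, 6   # 6 planes = 64 colours
ROW_BYTES = WIDTH // 8

# One independent buffer per bitplane, as on the Amiga.
planes = [bytearray(ROW_BYTES * HEIGHT) for _ in range(DEPTH)]

def put_pixel(x, y, colour):
    """Write one pixel: one bit in each of the DEPTH planes."""
    byte = y * ROW_BYTES + x // 8
    mask = 0x80 >> (x % 8)           # leftmost pixel is the high bit
    for p in range(DEPTH):
        if (colour >> p) & 1:
            planes[p][byte] |= mask  # read-modify-write
        else:
            planes[p][byte] &= ~mask

put_pixel(9, 0, 0b101101)            # six memory accesses for one pixel
```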

The CDTV was an aesthetically pleasing piece of hardware that absolutely sucked. It was an Amiga 500 in a hi-fi box with a caddy-loading CD drive, and it ran software that was just awful. There's no path to remediation here. No compelling apps were ever released. It's a terrible device. I love it. I bought one in 1996 because a local computer store had one and I pointed out that the company selling it had gone bankrupt some years earlier and literally nobody in my farming town was ever going to have any interest in buying a CD player that made a whirring noise when you turned it on because it had a fan and eventually they just sold it to me for not much money, and ever since then I wanted to have a CD player that ran Linux and well spoiler 30 years later I'm nearly there. That CDTV is going to be our test subject. We're going to try to get Doom running on it without executing any 68000 instructions.

We're facing two main problems here. The first is that all Amigas have a firmware ROM called Kickstart that runs at powerup. No matter how little you care about using any OS functionality, you can't start running your code until Kickstart has run. This means even documentation describing bare metal Amiga programming assumes that the hardware is already in the state that Kickstart left it in. This will become important later. The second is that we're going to need to actually write the code to use the Amiga hardware.

First, let's talk about Amiga graphics. We've already covered bitplanes, but for anyone used to modern hardware that's not the weirdest thing about what we're dealing with here. The CDTV's chipset supports a maximum of 64 colours in a mode called "Extra Half-Brite", or EHB, where you have 32 colours arbitrarily chosen from a palette and then 32 more colours that are identical but with half the intensity. For 64 colours we need 6 bitplanes, each of which can be located arbitrarily in the region of RAM accessible to the chipset ("chip RAM", distinguished from "fast RAM" that's only accessible to the CPU). We tell the chipset where our bitplanes are and it displays them. Or, well, it does for a frame - after that the registers that pointed at our bitplanes no longer do, because when the hardware was DMAing through the bitplanes to display them it was incrementing those registers to point at the next address to DMA from. Which means that every frame we need to set those registers back.
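The EHB rule is simple enough to state as code. Here's a sketch of it (my own illustration; Amiga colour registers hold 12-bit values, 4 bits per channel, and EHB entry 32+i is entry i with each channel halved):

```python
# Extra Half-Brite: colours 32-63 aren't stored anywhere - the hardware
# displays colour register (i - 32) with each 4-bit channel halved.

def ehb_colour(palette, index):
    """palette: the 32 hardware colours as 12-bit 0xRGB words."""
    if index < 32:
        return palette[index]
    base = palette[index - 32]
    r, g, b = (base >> 8) & 0xF, (base >> 4) & 0xF, base & 0xF
    return ((r >> 1) << 8) | ((g >> 1) << 4) | (b >> 1)

palette = [0x0000] * 32
palette[1] = 0x0FFF                        # white
assert ehb_colour(palette, 33) == 0x0777   # half-brite white: mid grey
```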

Making sure you have code that's called every frame just to make your graphics work sounds intensely irritating, so Commodore gave us a way to avoid doing that. The chipset includes a coprocessor called "copper". Copper doesn't have a large set of features - in fact, it only has three. The first is that it can program chipset registers. The second is that it can wait for a specific point in screen scanout. The third (which we don't care about here) is that it can optionally skip an instruction if a certain point in screen scanout has already been reached. We can write a program (a "copper list") for the copper that tells it to program the chipset registers with the locations of our bitplanes and then wait until the end of the frame, at which point it will repeat the process. Now our bitplane pointers are always valid at the start of a frame.
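A copper list for the scheme above might be built like this. This is a hedged sketch: the MOVE/WAIT word encoding follows the Amiga Hardware Reference Manual as I remember it (MOVE is a register offset with bit 0 clear followed by the value; WAIT is a beam position with bit 0 set followed by a compare mask), the BPL1PTH/BPL1PTL offsets (0x0E0/0x0E2) are the real bitplane pointer registers, and the bitplane address is made up:

```python
# Build a minimal copper list: program the bitplane pointer registers,
# then wait for end of frame so the copper restarts the list next frame.

def cop_move(reg, value):
    """MOVE: two words - register offset (bit 0 clear), then the value."""
    return [reg & 0x1FE, value & 0xFFFF]

def cop_wait(vpos, hpos):
    """WAIT: two words - beam position (bit 0 set), then a compare mask."""
    return [((vpos & 0xFF) << 8) | (hpos & 0xFE) | 1, 0xFFFE]

def copper_list(bitplane_addrs):
    words = []
    for i, addr in enumerate(bitplane_addrs):
        words += cop_move(0x0E0 + 4 * i, addr >> 16)     # BPLxPTH
        words += cop_move(0x0E2 + 4 * i, addr & 0xFFFF)  # BPLxPTL
    # Wait for an impossible beam position: 0xFFFF,0xFFFE is also the
    # conventional end-of-list marker.
    words += cop_wait(0xFF, 0xFE)
    return words

lst = copper_list([0x20000])   # one bitplane at chip RAM 0x20000
```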

Ok! We know how to display stuff. Now we just need to deal with not having 256 colours, and the whole "Doom expects pixels" thing. For the first of these, I stole code from ADoom, the only Amiga Doom port I could easily find source for. This looks at the 256 colour palette loaded by Doom and calculates the closest approximation it can within the constraints of EHB. ADoom also includes a bunch of CPU-specific assembly optimisation for converting the "chunky" Doom graphic buffer into the "planar" Amiga bitplanes, none of which I used because (a) it's all for 68000 series CPUs and we're running on ARM, and (b) I have a quad core CPU running at 1.4GHz and I'm going to be pushing all the graphics over a 7.14MHz bus, the graphics mode conversion is not going to be the bottleneck here. Instead I just wrote a series of nested for loops that iterate through each pixel and update each bitplane and called it a day. The set of bitplanes I'm operating on here is allocated on the Linux side so I can read and write to them without being restricted by the speed of the Amiga bus (remember, each byte in each bitplane is going to be updated 8 times per frame, because it holds bits associated with 8 pixels), and then copied over to the Amiga's RAM once the frame is complete.
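The nested-loop chunky-to-planar conversion reads roughly like this (my own sketch of the approach described, not the actual code; dimensions are Doom's 320x200):

```python
# Chunky to planar: scatter each 8-bit pixel into one bit per bitplane.

WIDTH, HEIGHT, DEPTH = 320, 200, 6
ROW_BYTES = WIDTH // 8

def chunky_to_planar(chunky):
    """chunky: WIDTH*HEIGHT bytes, one palette index per pixel (already
    remapped into 6 bits for EHB). Returns DEPTH independent bitplanes."""
    planes = [bytearray(ROW_BYTES * HEIGHT) for _ in range(DEPTH)]
    for y in range(HEIGHT):
        for x in range(WIDTH):
            pixel = chunky[y * WIDTH + x]
            byte = y * ROW_BYTES + x // 8
            mask = 0x80 >> (x % 8)
            for p in range(DEPTH):
                if (pixel >> p) & 1:
                    planes[p][byte] |= mask
    return planes
```

Doing this on the Linux side and copying the finished planes over in one go means the 8-accesses-per-byte cost is paid against fast local memory, not the 7.14MHz bus.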

And, kind of astonishingly, this works! Once I'd figured out where I was going wrong with RGB ordering and which order the bitplanes go in, I had a recognisable copy of Doom running. Unfortunately there were weird graphical glitches - sometimes blocks would be entirely the wrong colour. It took me a while to figure out what was going on and then I felt stupid. Recording the screen and watching in slow motion revealed that the glitches often showed parts of two frames displaying at once. The Amiga hardware is taking responsibility for scanning out the frames, and the code on the Linux side isn't synchronised with it at all. That means I could update the bitplanes while the Amiga was scanning them out, resulting in a mashup of planes from two different Doom frames being used as one Amiga frame. One approach to avoid this would be to tie the Doom event loop to the Amiga, blocking my writes until the end of scanout. The other is to use double-buffering - have two sets of bitplanes, one being displayed and the other being written to. This consumes more RAM but since I'm not using the Amiga RAM for anything else that's not a problem. With this approach I have two copper lists, one for each set of bitplanes, and switch between them on each frame. This improved things a lot but not entirely, and there are still glitches when the palette is being updated (because there's only one set of colour registers), something Doom does rather a lot, so I'm going to need to implement proper synchronisation.
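The double-buffering scheme can be sketched as follows. COP1LC (the copper list location register) is real; everything else here - the class, the callback, the dict standing in for the chipset - is a stand-in for illustration:

```python
# Double buffering with two copper lists: draw into the back buffer,
# then point the copper at the completed frame and swap roles.

class DoubleBuffer:
    def __init__(self, copper_list_a, copper_list_b):
        self.lists = [copper_list_a, copper_list_b]
        self.back = 0                # index of the buffer being drawn into

    def render(self, draw, chipset):
        draw(self.back)              # draw the Doom frame into the back buffer
        # Point the copper at the freshly completed frame; the hardware
        # picks the new list up at the next vertical blank.
        chipset["COP1LC"] = self.lists[self.back]
        self.back ^= 1               # swap front and back
```

This removes tearing within the bitplanes, but as noted above it can't help with the single shared set of colour registers.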

Except. This was only working if I ran a 68K emulator first in order to run Kickstart. If I tried accessing the hardware without doing that, things were in a weird state. I could update the colour registers, but accessing RAM didn't work - I could read stuff out, but anything I wrote vanished. Some more digging cleared that up. When you turn on a CPU it needs to start executing code from somewhere. On modern x86 systems it starts from a hardcoded address of 0xFFFFFFF0, which was traditionally a long way from any RAM. The 68000 family instead reads its start address from address 0x00000004, which overlaps with where the Amiga chip RAM is. We can't write anything to RAM until we're executing code, and we can't execute code until we tell the CPU where the code is, which seems like a problem. This is solved on the Amiga by powering up in a state where the Kickstart ROM is "overlaid" onto address 0. The CPU reads the start address from the ROM, which causes it to jump into the ROM and start executing code there. Early on, the code tells the hardware to stop overlaying the ROM onto the low addresses, and now the RAM is available. This is poorly documented because it's not something you need to care about if you execute Kickstart, which every actual Amiga does; I'm only in this position because I've made poor life choices. But ok, that explained things. To turn off the overlay you write to a register in one of the Complex Interface Adaptor (CIA) chips, and things start working like you'd expect.

Except, they don't. Writing to that register did nothing for me. I assumed that there was some other register I needed to write to first, and went to the extent of tracing every register access that occurred when running the emulator and replaying those in my code. Nope, still broken. What I finally discovered is that you need to pulse the reset line on the board before some of the hardware starts working - powering it up doesn't put you in a well defined state, but resetting it does.

So, I now have a slightly graphically glitchy copy of Doom running without any sound, displaying on an Amiga whose brain has been replaced with a parasitic Linux. Further updates will likely make things even worse. Code is, of course, available.

[1] This is why we had trouble with late era 32 bit systems and 4GB of RAM - a bunch of your hardware wanted to be in the same address space, so you couldn't put RAM there, and you ended up with less than 4GB of usable RAM.
[personal profile] fanf

https://dotat.at/@/2025-08-04-p-fast-trie.html

Here's a sketch of an idea that might or might not be a good idea. Dunno if it's similar to something already described in the literature -- if you know of something, please let me know via the links in the footer!

The gist is to throw away the tree and interior pointers from a qp-trie. Instead, the p-fast trie is stored using a hash map organized into stratified levels, where each level corresponds to a prefix of the key.

Exact-match lookups are normal O(1) hash map lookups. Predecessor / successor searches use binary chop on the length of the key. A qp-trie search is O(k), where k is the length of the key; a p-fast trie search is O(log k).
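A minimal sketch of the binary chop (my own toy reconstruction of the idea, not fanf's design in detail): store every prefix of every key in a per-length hash set, then binary-search on prefix length. This works because presence is monotone - if a length-m prefix is present, every shorter prefix is too - so each O(1) hash probe halves the remaining range of lengths.

```python
# Longest-matching-prefix in O(log k) hash probes via binary chop on
# prefix length, over stratified per-length hash sets.

class PrefixLevels:
    def __init__(self, keys):
        self.maxlen = max((len(k) for k in keys), default=0)
        # level[i] holds every length-i prefix of the stored keys
        self.level = [set() for _ in range(self.maxlen + 1)]
        for k in keys:
            for i in range(len(k) + 1):
                self.level[i].add(k[:i])

    def longest_prefix(self, q):
        """Binary chop on prefix length: O(log k) probes, not O(k)."""
        lo, hi = 0, min(len(q), self.maxlen)
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if q[:mid] in self.level[mid]:
                lo = mid        # a length-mid prefix exists: chop upward
            else:
                hi = mid - 1    # absent: the answer is shorter
        return q[:lo]

t = PrefixLevels(["example.com", "example.org"])
```

(A real p-fast trie would hash fixed-width chunks rather than store whole prefix strings per level; this sketch only illustrates the binary chop.)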

This smaller O(log k) bound is why I call it a "p-fast trie" by analogy with the x-fast trie, which has O(log log N) query time. (The "p" is for popcount.) I'm not sure if this asymptotic improvement is likely to be effective in practice; see my thoughts towards the end of this note.


[personal profile] solarbird

Greater Northshore Bike Connector Map 2.0 – 4 August 2025 – is now available on github, as is MEGAMAP 2.0.1.

Mostly small updates this time, but one in particular is very important, and another is pretty important if you’re in Shoreline:

  • ADDED: Alaskan Way Connector linking Elliot Bay Trail to Waterfront Trail with fully separated bikeways. Decades in the making, finally here (MEGAMAP only)
  • ADDED: Painted bike lanes on Meridian Ave N in Shoreline between 155th and 175th streets (both maps)
  • ADDED: “Commonly used” markers on Meridian Ave N throughout Shoreline (both maps) – this is somewhat aspirational, as there has been use of this road as a secondary to tertiary bike arterial but not quite enough to justify marking it as such until now. I am fairly certain that the new bike lanes in the middle of the route will increase its utility enough to justify it (both maps)
  • ADDED: “Commonly used” markers on a section of Fremont Ave in Shoreline, because that section is used a little more than parts of Meridian which now carry that marking, and one should be consistent (both maps)
  • ADDED: A weird little section of bike path I found in Lynnwood north of 196th at Wilcox Park. As 196th loses its sidewalks in that area, even this standalone oddness serves a useful purpose if you’re having to sidewalk-bike on 196th, say, to get to Gregg’s Cycles (MEGAMAP only)
  • ADDED: A few more street names in City of Seattle, along with a couple of small adjustments on difficult streets (both maps)
  • CORRECTION: REI Lynnwood’s icon was placed very slightly left of its actual location, and has been adjusted (MEGAMAP only)
Screen resolution preview of MEGAMAP 2.0.1 - 4 August 2025

All permalinks continue to work.

If you enjoy these maps and feel like throwing some change at the tip jar, here’s my patreon. Patreon supporters get things like pre-sliced printables of the Greater Northshore, and also the completely-uncompressed MEGAMAP, not that the .jpg has much compression in it because honestly it doesn’t.

Enjoy biking!

Posted via Solarbird{y|z|yz}, Collected.

2025-08-02 04:11 pm
[personal profile] ambien_noisewall posting in [community profile] common_nature
there's a pond with lots of frogs at my job and on my breaks I walk the perimeter and every couple steps I hear a croak and a sploosh and see one swim away. not this guy though, he wasn't scared of me at all :)


Agate Beach

2025-08-02 01:45 pm
[personal profile] yourlibrarian posting in [community profile] common_nature


Our next travel stop was the Newport area and our hotel at Agate Beach. There was some fog the day we arrived but the next day dawned completely clear, giving us great views of the nearby lighthouse.


Follow Friday 8-1-25

2025-08-01 02:56 am
[personal profile] ysabetwordsmith posting in [community profile] followfriday
Got any Follow Friday-related posts to share this week? Comment here with the link(s).

Here's the plan: every Friday, let's recommend some people and/or communities to follow on Dreamwidth. That's it. No complicated rules, no "pass this on to 7.328 friends or your cat will die".

[personal profile] mjg59
LWN wrote an article which opens with the assertion "Linux users who have Secure Boot enabled on their systems knowingly or unknowingly rely on a key from Microsoft that is set to expire in September". This is, depending on interpretation, either misleading or just plain wrong, but also there's not a good source of truth here, so.

First, how does secure boot signing work? Every system that supports UEFI secure boot ships with a set of trusted certificates in a database called "db". Any binary signed with a chain of certificates that chains to a root in db is trusted, unless either the binary (via hash) or an intermediate certificate is added to "dbx", a separate database of things whose trust has been revoked[1]. But, in general, the firmware doesn't care about the intermediate or the number of intermediates or whatever - as long as there's a valid chain back to a certificate that's in db, it's going to be happy.
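The trust decision described above amounts to a few lines of logic. Here's a toy version (not real UEFI code; names are illustrative):

```python
# Toy Secure Boot trust check: a binary is trusted if its certificate
# chain terminates in a root present in db, and neither the binary's
# hash nor any certificate in the chain appears in dbx.

def secure_boot_trusts(binary_hash, chain, db, dbx):
    """chain: leaf-to-root list of certificate identities."""
    if binary_hash in dbx:
        return False             # binary revoked by hash
    if any(cert in dbx for cert in chain):
        return False             # an intermediate (or root) revoked
    # The firmware doesn't care how many intermediates there are -
    # only that the chain ends at a trusted root.
    return chain[-1] in db

db = {"Microsoft Corporation UEFI CA 2011"}
dbx = {"hash-of-known-bad-bootloader"}
chain = ["Microsoft Windows UEFI Driver Publisher",
         "Microsoft Corporation UEFI CA 2011"]
```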

That's the conceptual version. What about the real world one? Most x86 systems that implement UEFI secure boot have at least two root certificates in db - one called "Microsoft Windows Production PCA 2011", and one called "Microsoft Corporation UEFI CA 2011". The former is the root of a chain used to sign the Windows bootloader, and the latter is the root used to sign, well, everything else.

What is "everything else"? For people in the Linux ecosystem, the most obvious thing is the Shim bootloader that's used to bridge between the Microsoft root of trust and a given Linux distribution's root of trust[2]. But that's not the only third party code executed in the UEFI environment. Graphics cards, network cards, RAID and iSCSI cards and so on all tend to have their own unique initialisation process, and need board-specific drivers. Even if you added support for everything on the market to your system firmware, a system built last year wouldn't know how to drive a graphics card released this year. Cards need to provide their own drivers, and these drivers are stored in flash on the card so they can be updated. But since UEFI doesn't have any sandboxing environment, those drivers could do pretty much anything they wanted to. Someone could compromise the UEFI secure boot chain by just plugging in a card with a malicious driver on it, and have that hotpatch the bootloader and introduce a backdoor into your kernel.

This is avoided by enforcing secure boot for these drivers as well. Every plug-in card that carries its own driver has it signed by Microsoft, and up until now that's been a certificate chain going back to the same "Microsoft Corporation UEFI CA 2011" certificate used in signing Shim. This is important for reasons we'll get to.

The "Microsoft Windows Production PCA 2011" certificate expires in October 2026, and the "Microsoft Corporation UEFI CA 2011" one in June 2026. These dates are not that far in the future! Most of you have probably at some point tried to visit a website and got an error message telling you that the site's certificate had expired and that it's no longer trusted, and so it's natural to assume that the outcome of time's arrow marching past those expiry dates would be that systems will stop booting. Thankfully, that's not what's going to happen.

First up: if you grab a copy of the Shim currently shipped in Fedora and extract the certificates from it, you'll learn it's not directly signed with the "Microsoft Corporation UEFI CA 2011" certificate. Instead, it's signed with a "Microsoft Windows UEFI Driver Publisher" certificate that chains to the "Microsoft Corporation UEFI CA 2011" certificate. That's not unusual, intermediates are commonly used and rotated. But if we look more closely at that certificate, we learn that it was issued in 2023 and expired in 2024. Older versions of Shim were signed with older intermediates. A very large number of Linux systems are already booting Shims signed with certificates that have expired, and yet things keep working. Why?

Let's talk about time. In the ways we care about in this discussion, time is a social construct rather than a meaningful reality. There's no way for a computer to observe the state of the universe and know what time it is - it needs to be told. It has no idea whether that time is accurate or an elaborate fiction, and so it can't with any degree of certainty declare that a certificate is valid from an external frame of reference. The failure modes of getting this wrong are also extremely bad! If a system has a GPU that relies on an option ROM, and if you stop trusting the option ROM because either its certificate has genuinely expired or because your clock is wrong, you can't display any graphical output[3] and the user can't fix the clock and, well, crap.
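To make the failure mode concrete, here's a toy validity check (nothing to do with real UEFI code; the validity window is illustrative, roughly matching the "Microsoft Corporation UEFI CA 2011" dates mentioned above): the result depends entirely on a clock the firmware cannot trust.

```python
# A naive certificate time check, and the pragmatic alternative.

from datetime import datetime

def cert_time_valid(not_before, not_after, now, enforce_expiry=True):
    if not enforce_expiry:
        return True              # what shipping firmware effectively does
    return not_before <= now <= not_after

# Illustrative validity window for a UEFI CA-style certificate.
nb, na = datetime(2011, 6, 27), datetime(2026, 6, 27)

# A dead CMOS battery resets the clock to whatever epoch the vendor
# chose - and suddenly nothing is "valid", including the GPU's option
# ROM, at which point the user can't even see a screen to fix the clock.
wrong_clock = datetime(2005, 1, 1)
```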

The upshot is that nobody actually enforces these expiry dates - here's the reference code that disables it. In a year's time we'll have gone past the expiration date for "Microsoft Windows UEFI Driver Publisher" and everything will still be working, and a few months later "Microsoft Windows Production PCA 2011" will also expire and systems will keep booting Windows despite being signed with a now-expired certificate. This isn't a Y2K scenario where everything keeps working because people have done a huge amount of work - it's a situation where everything keeps working even if nobody does any work.

So, uh, what's the story here? Why is there any engineering effort going on at all? What's all this talk of new certificates? Why are there sensationalist pieces about how Linux is going to stop working on old computers or new computers or maybe all computers?

Microsoft will shortly start signing things with a new certificate that chains to a new root, and most systems don't trust that new root. System vendors are supplying updates[4] to their systems to add the new root to the set of trusted keys, and Microsoft has supplied a fallback that can be applied to all systems even without vendor support[5]. If something is signed purely with the new certificate then it won't boot on something that only trusts the old certificate (which shouldn't be a realistic scenario due to the above), but if something is signed purely with the old certificate then it won't boot on something that only trusts the new certificate.

How meaningful a risk is this? We don't have an explicit statement from Microsoft as yet as to what's going to happen here, but we expect that there'll be at least a period of time where Microsoft signs binaries with both the old and the new certificate, and in that case those objects should work just fine on both old and new computers. The problem arises if Microsoft stops signing things with the old certificate, at which point new releases will stop booting on systems that don't trust the new key (which, again, shouldn't happen). But even if that does turn out to be a problem, nothing is going to force Linux distributions to stop using existing Shims signed with the old certificate, and having a Shim signed with an old certificate does nothing to stop distributions signing new versions of grub and kernels. In an ideal world we have no reason to ever update Shim[6] and so we just keep on shipping one signed with two certs.

If there's a point in the future where Microsoft only signs with the new key, and if we were to somehow end up in a world where systems only trust the old key and not the new key[7], then those systems wouldn't boot with new graphics cards, wouldn't be able to run new versions of Windows, wouldn't be able to run any Linux distros that ship with a Shim signed only with the new certificate. That would be bad, but we have a mechanism to avoid it. On the other hand, systems that only trust the new certificate and not the old one would refuse to boot older Linux, wouldn't support old graphics cards, and also wouldn't boot old versions of Windows. Nobody wants that, and for the foreseeable future we're going to see new systems continue trusting the old certificate and old systems have updates that add the new certificate, and everything will just continue working exactly as it does now.

Conclusion: Outside some corner cases, the worst case is you might need to boot an old Linux to update your trusted keys to be able to install a new Linux, and no computer currently running Linux will break in any way whatsoever.

[1] (there's also a separate revocation mechanism called SBAT which I wrote about here, but it's not relevant in this scenario)

[2] Microsoft won't sign GPLed code for reasons I think are unreasonable, so having them sign grub was a non-starter, but also the point of Shim was to allow distributions to have something that doesn't change often and be able to sign their own bootloaders and kernels and so on without having to have Microsoft involved, which means grub and the kernel can be updated without having to ask Microsoft to sign anything and updates can be pushed without any additional delays

[3] It's been a long time since graphics cards booted directly into a state that provided any well-defined programming interface. Even back in the 90s, cards didn't present VGA-compatible registers until card-specific code had been executed (hence DEC Alphas having an x86 emulator in their firmware to run the driver on the card). No driver? No video output.

[4] There's a UEFI-defined mechanism for updating the keys that doesn't require a full firmware update, and it'll work on all devices that use the same keys rather than being per-device

[5] Using the generic update without a vendor-specific update means it wouldn't be possible to issue further updates for the next key rollover, or any additional revocation updates, but I'm hoping to be retired by then and I hope all these computers will also be retired by then

[6] I said this in 2012 and it turned out to be wrong then so it's probably wrong now sorry, but at least SBAT means we can revoke vulnerable grubs without having to revoke Shim

[7] Which shouldn't happen! There's an update to add the new key that should work on all PCs, but there's always the chance of firmware bugs

2025-07-30 10:18 pm
[personal profile] fox_in_me posting in [community profile] addme


Name: Mr. Fox

Age: 30-something


I mostly post about:
Stories from my life — my thoughts and feelings, especially during this time of war in Ukraine. I try to capture emotions honestly: memories of a peaceful past, reflections on the present, and tales from my life as a mariner and traveler.
This journal is still in its early days, after a long break from writing. Each entry is posted in both English and the original language. I also share my own photographs — from different times, chosen to reflect my current mood.

My hobbies are:
Photography (almost professional), lomography (daily photos of interesting moments), music (acoustic, alternative, instrumental covers), psychology, and classical literature. I love discovering new things — ideas, places, people.

My fandoms are:
Honestly, I’m not active in any specific fandom. But I enjoy reading and learning, especially to improve my English.

I'm looking to meet people who:
…feel connected to what I write — kindred spirits or simply those who find meaning in my words. I’m open to everyone (with one exception: I don’t welcome those who support or excuse the war). My posts are open and honest. I’d love to find new interesting people to read and connect with.

My posting schedule tends to be:
Currently daily, or a few times a week — depending on my free time.

When I add people, my dealbreakers are:
No major dealbreakers — most of what matters is already said above.

Before adding me, you should know:
I’m an open person without any particular agenda. I’m Ukrainian — and perhaps that matters now, just to avoid misunderstandings.
Welcome aboard. These are my messages in a bottle.

[personal profile] solarbird

Seattle Cider dropped a cosmic-crisp single variety cider, so I had a sample this weekend at their booth at the Lake Forest Park Farmer’s Market.

My expectations were pretty low, honestly, because the cosmic crisp is what I think of as an adequate apple, but no better. It functions as an apple, it fulfills the role of apple, the texture is reliable and the flavour is acceptable, but there are many which are better at being what it tries to be. Fuji and honeycrisp both come immediately to mind as similar but better cultivars, though neither is as durable in shipping.

(This isn’t to condemn the cosmic crisp; the last apple to occupy its particular market slot was the loathsome red delicious, a mealy, tasteless apple-shaped object which fulfilled the function of looking like an apple, but not that of being an apple. The cosmic crisp is far superior, something I will eat intentionally and – in the case of a better example – actually enjoy.)

So after saying more or less all the above to the Seattle Cider rep, and adding that making a cider from it seemed fairly unlikely, I gave it a try.

It’s the pilsner of the apple cider world – but it’s a pretty decent pilsner.

I don’t mean to say that it tastes like a pilsner; it doesn’t. I don’t even like beer, and pilsners are not exceptions. But I know some of the roles of different beers, and this cider lands right in the same spot. It’s light, but in defiance of my expectations, it’s not empty. It has a presence. It’s the sort of cider you’d actively enjoy in the shade during a very hot day, probably after you’ve been doing something athletic.

In that way, it reminds me a bit of Growers, made up in BC, which lands in roughly the same weight location.

Just as the cosmic crisp isn’t a great apple, this isn’t a great cider. But that doesn’t mean it’s not a pleasant or enjoyable cider. Kind of like – and yet moreso than – the cosmic crisp, I think it’s a cider that has an actual role, one other than doing a good job at surviving shipping.

It’s supposed to be hot this week. If not this week, we’re heading into August.

I bought a bottle. We’ll see.

Posted via Solarbird{y|z|yz}, Collected.

A walk to Dothill

2025-07-26 12:35 pm
[personal profile] cmcmck posting in [community profile] common_nature
Dothill is on the moorland side of town and is an interesting combo of marshland, wetland and lakes.

This path takes you in once you walk through Donnerville Spinney to get there:




Starvation Falls

2025-07-25 05:36 pm
[personal profile] yourlibrarian posting in [community profile] common_nature


Our last waterfall of the trip, Starvation Falls. Smaller than the others with a little creek running down near the parking lot.


Follow Friday 7-25-25

2025-07-25 12:57 am
[personal profile] ysabetwordsmith posting in [community profile] followfriday
Got any Follow Friday-related posts to share this week? Comment here with the link(s).

Here's the plan: every Friday, let's recommend some people and/or communities to follow on Dreamwidth. That's it. No complicated rules, no "pass this on to 7.328 friends or your cat will die".

2025-07-24 08:23 pm
[personal profile] chocolatefrogs posting in [community profile] addme
Name: Amber

Age:

40's.

I mostly post about:

My Star Trek fanclubs and Ghostbusters fanclub, photos, paintings, drawings, fanfic, shows, binges, cosplay, real life, health issues, Fall, Halloween.

My hobbies are:

My Star Trek club, photography, drawing, painting rocks/hiding them, theme parks, cosplay, disneybounding, binging shows/movies.

My fandoms are:

Way too many to list but here goes: 9-1-1, 9-1-1 Lone Star, Star Trek, Star Wars, Shadowhunters, Supernatural (not much anymore), Harry Potter, The Lord of The Rings/The Hobbit, Jurassic Park/World (except new one), Psych, Doctor Who, Ghostbusters, Indiana Jones, Back to The Future, Scream, IT (old one), A Nightmare on Elm Street, Halloween, Horror, Disney, Marvel, PokemonGo (Is that considered fandom? lol).

I'm looking to meet people who:

Same interests as me.

My posting schedule tends to be: daily/weekly/monthly/sporadic/etc

Whenever I have something to post or say.

When I add people, my dealbreakers are:

politics (especially if that's all you post about I will not comment on them), homophobes, transphobes, church/God haters, Trump supporters, inactive accounts, bigotry, racists.

Before adding me, you should know: I'm an open-heart patient with 8 surgeries and a pacemaker surgery to my name. So my posts are often about my health problems. I'm an introvert except around my club members and even then sometimes still. I'll delete if someone never comments on something or I don't feel we connected, nothing personal.

Question thread #143

2025-07-24 03:46 pm
[personal profile] pauamma posting in [site community profile] dw_dev
It's time for another question thread!

The rules:

- You may ask any dev-related question you have in a comment. (It doesn't even need to be about Dreamwidth, although if it involves a language/library/framework/database Dreamwidth doesn't use, you will probably get answers pointing that out and suggesting a better place to ask.)
- You may also answer any question, using the guidelines given in To Answer, Or Not To Answer and in this comment thread.

Volunteer social thread #156

2025-07-24 03:42 pm
[personal profile] pauamma posting in [site community profile] dw_volunteers
26 years ago today, I was putting out what turned out to be my last cigarette.

How's everyone else doing?