
Programmatically render a NextJS page without a server in Node

September 6, 2022
0 comments Web development, Node, JavaScript

If you use getServerSideProps() in Next you can render a page by visiting it, e.g. GET http://localhost:3000/mypages/page1.
Or, if you use getStaticProps() with getStaticPaths(), you can use npm run build to generate the HTML files (e.g. in the .next/server/pages directory).
But what if you don't want to start a server? What if you have a particular page/URL in mind that you want to generate, but without starting a server and sending an HTTP GET request to it? This blog post shows a way to do that with a plain Node script.

Here's a solution to programmatically render a page:


#!/usr/bin/env node

import http from "http";

import next from "next";

async function main(uris) {
  const nextApp = next({});
  const nextHandleRequest = nextApp.getRequestHandler();
  await nextApp.prepare();

  const htmls = Object.fromEntries(
    await Promise.all(
      uris.map((uri) => {
        try {
          // If it's a fully qualified URL, make it its pathname
          uri = new URL(uri).pathname;
        } catch {}
        return renderPage(nextHandleRequest, uri);
      })
    )
  );
  console.log(htmls);
}

async function renderPage(handler, url) {
  // Fake a minimal request/response pair without any underlying socket
  const req = new http.IncomingMessage(null);
  const res = new http.ServerResponse(req);
  req.method = "GET";
  req.url = url;
  req.path = url;
  req.cookies = {};
  req.headers = {};
  await handler(req, res);
  if (res.statusCode !== 200) {
    throw new Error(`${res.statusCode} on rendering ${req.url}`);
  }
  // Since nothing was written to a real socket, dig the HTML body out of
  // the buffered output data (headers and body are separated by a blank line)
  for (const { data } of res.outputData) {
    const [, body] = data.split("\r\n\r\n");
    if (body) return [url, body];
  }
  throw new Error("No output data has a body");
}

main(process.argv.slice(2)).catch((err) => {
  console.error(err);
  process.exit(1);
});

To demonstrate I created this sample repo: https://github.com/peterbe/programmatically-render-next-page

Note that you need to run npm run build first so Next has all the static assets ready.
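
For example, if you save the script above as render.mjs in the root of your Next project (the filename here is just an example), invoking it could look something like this:


▶ npm run build
▶ node render.mjs /aboutus /contact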

In conclusion

The alternative, in automation, would be to run something like this:


▶ npm run build && npm run start &
▶ sleep 5  # give the server a chance to start
▶ xh http://localhost:3000/aboutus
HTTP/1.1 200 OK
Connection: keep-alive
Content-Encoding: gzip
Content-Type: text/html; charset=utf-8
Date: Tue, 06 Sep 2022 12:23:42 GMT
Etag: "m8ff9sdduo1hk"
Keep-Alive: timeout=5
Transfer-Encoding: chunked
Vary: Accept-Encoding
X-Powered-By: Next.js

<!DOCTYPE html><html><head><meta charSet="utf-8"/><meta name="viewport" content="width=device-width"/><title>About Us page</title><meta name="description" content="We do things. I hope."/><link rel="icon" href="/favicon.ico"/><meta name="next-head-count" content="5"/><link rel="preload" href="/_next/static/css/ab44ce7add5c3d11.css" as="style"/><link rel="stylesheet" href="/_next/static/css/ab44ce7add5c3d11.css" data-n-g=""/><link rel="preload" href="/_next/static/css/ae0e3e027412e072.css" as="style"/><link rel="stylesheet" href="/_next/static/css/ae0e3e027412e072.css" data-n-p=""/><noscript data-n-css=""></noscript><script defer="" nomodule="" src="/_next/static/chunks/polyfills-c67a75d1b6f99dc8.js"></script><script src="/_next/static/chunks/webpack-7ee66019f7f6d30f.js" defer=""></script><script src="/_next/static/chunks/framework-db825bd0b4ae01ef.js" defer=""></script><script src="/_next/static/chunks/main-3123a443c688934f.js" defer=""></script><script src="/_next/static/chunks/pages/_app-deb173bd80cbaa92.js" defer=""></script><script src="/_next/static/chunks/996-f1475101e84cf548.js" defer=""></script><script src="/_next/static/chunks/pages/aboutus-41b1f037d974ef60.js" defer=""></script><script src="/_next/static/REJUWXI26y-lp9JVmzJB5/_buildManifest.js" defer=""></script><script src="/_next/static/REJUWXI26y-lp9JVmzJB5/_ssgManifest.js" defer=""></script></head><body><div id="__next"><div class="Home_container__bCOhY"><main class="Home_main__nLjiQ"><h1 class="Home_title__T09hD">About Use page</h1><p class="Home_description__41Owk"><a href="/">Go to the <b>Home</b> page</a></p></main><footer class="Home_footer____T7K"><a href="/">Home page</a></footer></div></div><script id="__NEXT_DATA__" type="application/json">{"props":{"pageProps":{}},"page":"/aboutus","query":{},"buildId":"REJUWXI26y-lp9JVmzJB5","nextExport":true,"autoExport":true,"isFallback":false,"scriptLoader":[]}</script></body></html>

There are probably many great ideas that this can be used for. At work we use getServerSideProps() and we have too many pages to build them all statically. We need a solution like this to do custom analysis of the rendered HTML to check for broken links by analyzing every generated <a href> tag.
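
For example, here's a minimal sketch of that kind of link extraction, assuming you feed the rendered HTML into a parser like cheerio (cheerio is just one option here, not something the script above depends on):


import * as cheerio from "cheerio";

// `htmls` is the { uri: html } object that main() above builds up
function extractInternalLinks(htmls) {
  const links = new Set();
  for (const html of Object.values(htmls)) {
    const $ = cheerio.load(html);
    $("a[href]").each((_, element) => {
      const href = $(element).attr("href");
      // Only keep internal links; those are the ones we can later
      // verify against the list of pages we know how to render
      if (href && href.startsWith("/")) {
        links.add(href);
      }
    });
  }
  return links;
}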

Introducing docsQL

March 28, 2022
0 comments Web development, GitHub, JavaScript

tl;dr; docsQL is a web app for analyzing lots of Markdown content files with SQL queries.

Demo

Sample instance based on MDN's open source content.


Background/Introduction

When I worked on the code for MDN in 2019-2021 I often found that I needed to understand the content better to debug or test or just find a sample page that uses some feature. I ended up writing a lot of one-off Python scripts that would traverse the repository files just to do some quick lookup that was too complex for grep. Eventually, I built a prototype called "Traits DB" which was powered by an in-browser SQL engine called alasql. Then in 2021, I joined GitHub to work on GitHub Docs and here there are lots of Markdown files too that trigger different features based on various front-matter keys.

docsQL does two things:

  1. Analyzes lots of .md files into a docs.json file which can be queried
  2. Provides a static single-page app for executing SQL against this database file

Plugins

The analyzing portion has a killer feature: you can write your own plugins tailored specifically to your project. Your project might have quirks that are unique to it. In GitHub Docs, for example, we use something called "LiquidJS", which is a pre-Markdown processing step for things like versioning. So I can write a custom JavaScript plugin that extends the data you get from reading the front-matter.

Here's an example plugin:


const regex = /💩/g;
export default function countCocoIceMentions({ data, content }) {
  const inTitle = (data.title.match(regex) || []).length;
  const inBody = (content.match(regex) || []).length;
  return {
    chocolateIcecreamMentions: inTitle + inBody,
  };
}

Now, if you add that to your project, you'll be able to run:


SELECT title, chocolateIcecreamMentions FROM ? 
WHERE chocolateIcecreamMentions > 0 
ORDER BY 2 DESC LIMIT 15

How you're supposed to use it

It's up to you. One important fact to keep in mind is that not everyone speaks SQL fluently. And even if you're somewhat confident with SQL, it might not be obvious how this particular engine works or what the fields are. (Mind you, there's a "Help" section which shows you all the fields and a collection of sample queries.)
But it's really intuitive to extend an already-written SQL query. So if someone shares their query, it's easy to build on it. For example, your colleague might share a URL with an SQL query in the query string, but you want to change the sort order, so you just change DESC to ASC.

I would recommend that any team with a project containing a bunch of Markdown files add docsql as a dependency somewhere, have it build against your directory of Markdown files, and then publish the docsql/out/ directory as a static web page, which you can host on Netlify or GitHub Pages.
This way, your team gets a centralized place where team members can share URLs with each other that have queries in them. When someone shares one of these, it gets added to your "Saved queries" and you can extend it from there to add to your own list.

Behind the scenes

The project is here: github.com/peterbe/docsql and it's MIT licensed. The analyzing part is all Node. It's a CLI that is able to dynamically import other .mjs files based on scanning the directory at runtime.

The front-end is a NextJS static build which uses Mantine for the React UI components.

You can run it with npx like this:


npx docsql /path/to/my/markdown/files

But if you want to control it a bit better you can simply add it to your own Node project with npm install docsql or yarn add docsql.
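
For example, a possible way to wire that up in your own package.json (the ./docs path is just a placeholder for wherever your Markdown files live):


{
  "scripts": {
    "docsql": "docsql ./docs"
  }
}

Then npm run docsql would produce the static site in docsql/out/, which is what you'd publish.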

Let's make it better

First of all, it's a very new project. My initial goal was to get the basics working. A lot of edges have been left rough, especially in the areas of installation, performance, and the SQL editor. Please come and help out if you see something. In particular, if you tried to set it up but found it hard, we can work together to either improve the documentation or fix some scripts that would help the next person.

For feature requests and bug reports use: https://github.com/peterbe/docsql/issues/new
Or just comment here on the blog post.

From photo of ingredients, to your shopping list

September 10, 2021
0 comments Web development, That's Groce!, Firebase

Today I launched a really cool new feature in That's Groce!: the ability to upload photos of ingredients and have the food words automatically suggested for your shopping list.

It's best explained with this 26-second video.

The general idea

Food words found
The idea is that you know you're going to cook that Vegetarian Curry Lasagna on page 123 in Jamie's Summer Cookbook. Either you read the ingredients and type in each one you'll need to buy (because you know what's in your pantry and fridge), or you just take a photo of the whole ingredient listing. Then, when you're in the store and wonder "I remember I need red peppers, but how many was it again?!", you have the photo to check.
But not only that: once you've taken a photo of the list of ingredients, it can help you populate your shopping list.

How it works in That's Groce! is that your photo is turned into a block of text, and from that text certain "food words" are extracted and all else is ignored. The food words are based on a database of 3,600+ English food words that I've gathered and manually curated. There are lots of caveats. The 3,600+ food words are not perfect and there are surprisingly many combinations and plural vs. singular variations that can make the list incomplete. But that's where you can help! If you find a word it correctly scanned but didn't suggest, you can type in your own suggestions for everyone to benefit from. If you type in something it didn't manage to spot, I'll review it and add it to the global database.
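
Conceptually, the matching step boils down to comparing the scanned text against that food-word database. Here's a heavily simplified sketch of the idea (the real implementation has to deal with plurals, multi-word combinations, and so on):


// A tiny stand-in for the curated database of 3,600+ English food words
const FOOD_WORDS = new Set(["garlic", "red pepper", "lemon", "salt"]);

function extractFoodWords(scannedText) {
  const found = new Set();
  // Normalize the OCR'ed text before matching
  const cleaned = scannedText.toLowerCase().replace(/[^a-z\s]/g, " ");
  for (const word of FOOD_WORDS) {
    if (cleaned.includes(word)) {
      found.add(word);
    }
  }
  return [...found];
}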

Food-word recognition

The word recognition is done using the Google Cloud Vision API, a powerful machine-learning-based service from Google Cloud that is stunningly accurate. Tidy camera photos of cookbooks in good light come out nearly perfect, but it can also do an impressive job with photos of handwritten recipes, like this:

Handwritten recipe photo

One thing you will find is that it's often hard to take a photo of only the actual list of ingredients. Often, a chunk of the cooking instructions or the recipe "back story" gets into the photo frame. These words aren't actually ingredients and can lead to surprising food-word suggestions for things you definitely don't need to buy. It's not perfect, but at least you'll have a visual memory of what you're cooking as you're standing there in the grocery store.
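
For reference, here's a minimal sketch of what the photo-to-text step could look like with the Google Cloud Vision Node.js client (this is my own illustration of the plumbing, not the actual That's Groce! code):


import vision from "@google-cloud/vision";

const client = new vision.ImageAnnotatorClient();

async function photoToText(filePath) {
  // OCR the photo; fullTextAnnotation holds the entire detected block of text
  const [result] = await client.textDetection(filePath);
  return result.fullTextAnnotation ? result.fullTextAnnotation.text : "";
}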

Options

Understandably, it's nearly impossible for the app to know what you have in your pantry/fridge/freezer/spice rack. But a lot of recipes spell out exactly everything you need, not for buying, but for cooking. E.g. 1 teaspoon salt. But salt (or pepper or sugar or butter) is the kind of foodstuff you probably always have at home, so it doesn't make sense to suggest that you put "salt" on the shopping list. To solve that, you can configure your own preferred options of special keywords. For example, I've added "salt", "pepper", "sugar", "table salt", "black pepper" as words that can always be ignored. You can also train the system to set up aliases. For example, a lot of recipes call for "lemon zest" but what you actually purchase is a lemon, which you then grate yourself on the grater. So you can add special keywords that act as aliases for other words. Here's one example:

Options

Help out!

Foodwords database sample
Over the last couple of days, I've been snapping lots of photos from lots of different cookbooks, and every time I begin to think the database is getting good, I stumble on some new food word. For example, I recently tested a photo of a recipe that called for "Jackfruit". What even is that?! Anyway, it is inevitable that certain words are missing. But that's where we can help each other. If you test it out and you notice that it correctly scanned the text but the word wasn't suggested, click the "Suggest" button and type in your suggestions. Together we can, one food word at a time, eradicate misses.

There's still some work to be done to make the database even stronger by setting up clever aliases for everyone. For example, a lot of recipes call for "(some number) cloves garlic" but it can easily get confused for the spice "cloves" and the root vegetable "garlic". So, perhaps we can train it to recognize "cloves garlic" to actually just mean "garlic".

Also, the database is currently only in (primarily American) English. The platform would support other languages but I would definitely need a hand to seed it with more words in other languages.

Try it out

If you haven't set up an account yet (it's still free!), to test it, you can go to https://thatsgroce.web.app, click "Get started without signing in", go into your newly created shopping list, and press the "Photos" button. Try uploading some snapped photos from your cookbooks. Please please let me know what you think!

How I upload Firebase images optimized

September 2, 2021
0 comments JavaScript, Web development, Firebase

I have an app that allows you to upload images. The images are stored using Firebase Storage. Then, once uploaded, I have a Firebase Cloud Function that turns the image into a thumbnail. The problem is that, the first time, it takes a long time to wake up the cloud function and generate that thumbnail. Not to mention the download of the thumbnail payload to the client. It's not unrealistic that the whole thumbnail generation plus download can take multiple (single-digit) seconds. But you don't want the user to sit and wait that long. My solution is to display the uploaded file in an <img> tag using URL.createObjectURL().

The following code is mostly pseudo-code but should look familiar if you're used to how Firebase and React/Preact work. Here's the FileUpload component:


interface Props {
  onUploaded: ({ file, filePath }: { file: File; filePath: string }) => void;
  onSaved?: () => void;
}

function FileUpload({ onSaved, onUploaded }: Props) {
  const [file, setFile] = useState<File | null>(null);

  // ...some other state stuff omitted for example.

  useEffect(() => {
    if (file) {
      const metadata = {
        contentType: file.type,
      };

      const filePath = getImageFullPath(prefix, item ? item.id : list.id, file);
      const storageRef = storage.ref();

      const uploadTask = storageRef.child(filePath).put(file, metadata);
      uploadTask.on(
        "state_changed",
        (snapshot) => {
          // ...set progress percentage
        },
        (error) => {
          setUploadError(error);
        },
        () => {
          onUploaded({ file, filePath }); // THE IMPORTANT BIT!

          db.collection("pictures")
            .add({ filePath })
            .then(() => {
              if (onSaved) onSaved();
            });
        }
      );
    }
  }, [file]);

  return (
    <input
      type="file"
      accept="image/jpeg, image/png"
      onInput={(event) => {
        if (event.target.files) {
          const file = event.target.files[0];
          validateFile(file);
          setFile(file);
        }
      }}
    />
  );
}

The important "trick" is that we call back after the storage upload is complete, sending the filePath and the file back to whatever component rendered this component. Now the parent component knows that there will soon be an image reference with a file path (filePath) that corresponds to that File object.

Here's a rough version of how I use this <FileUpload> component:


function Images() {
  const [uploadedFiles, setUploadedFiles] = useState<Map<string, File>>(
    new Map()
  );

  return (
    <div>
      <FileUpload
        onUploaded={({ file, filePath }: { file: File; filePath: string }) => {
          const newMap: Map<string, File> = new Map(uploadedFiles);
          newMap.set(filePath, file);
          setUploadedFiles(newMap);
        }}
      />

      <ListUploadedPictures uploadedFiles={uploadedFiles} />
    </div>
  );
}

function ListUploadedPictures({
  uploadedFiles,
}: {
  uploadedFiles: Map<string, File>;
}) {
  // Imagine some Firebase Firestore subscriber here
  // that watches for uploaded pictures.
  return (
    <div>
      {pictures.map((picture) => (
        <Picture
          key={picture.filePath}
          picture={picture}
          uploadedFiles={uploadedFiles}
        />
      ))}
    </div>
  );
}

function Picture({
  uploadedFiles,
  picture,
}: {
  uploadedFiles: Map<string, File>;
  picture: {
    filePath: string;
  };
}) {
  const thumbnailURL = getThumbnailURL(picture.filePath, 500);
  const [loaded, setLoaded] = useState(false);

  useEffect(() => {
    let mounted = true;
    const preloadImg = new Image();
    preloadImg.src = thumbnailURL;

    const callback = () => {
      if (mounted) {
        setLoaded(true);
      }
    };
    if (preloadImg.decode) {
      preloadImg.decode().then(callback, callback);
    } else {
      preloadImg.onload = callback;
    }

    return () => {
      mounted = false;
    };
  }, [thumbnailURL]);

  // The File object we kept in memory right after the upload, if any
  const file = uploadedFiles.get(picture.filePath);

  return (
    <img
      style={{
        width: 500,
        height: 500,
        objectFit: "cover",
      }}
      src={
        loaded
          ? thumbnailURL
          : file
          ? URL.createObjectURL(file)
          : PLACEHOLDER_IMAGE
      }
    />
  );
}

Phew! That was a lot of code. Sorry about that. But still, this is just a summary of the real application code.

The point is that I send the File object back to the parent component immediately after having uploaded it to Firebase Cloud Storage. Then, having access to that File object, I can use it as the thumbnail while I wait for the real thumbnail to come in. Now it doesn't matter that it takes 1-2 seconds to wake up the cloud function, 1-2 seconds to perform the thumbnail creation, and then 0.1-2 seconds to download the thumbnail. All the while this is happening, you're looking at the File object that was uploaded. Visually, the user doesn't even notice the difference. If you refresh the page, that temporary in-memory uploadedFiles (Map instance) is empty, so you're now relying on the loading of the thumbnail, which should hopefully, at this point, be stored in the browser's native HTTP cache.

The other important part of the trick is that we use const preloadImg = new Image() for loading the thumbnail. By relying on preloadImg.decode ? preloadImg.decode().then(...) : preloadImg.onload = ..., we're informed only once the thumbnail has been successfully created and downloaded, and only then do we make the swap.

10 years a Mozillian, always a Mozillian

August 30, 2021
2 comments Web development, Mozilla, MDN

As of September 2021, I am leaving Mozilla after 10 years. It hasn't been perfect but it's been a wonderful time with fond memories and an amazing career rocket ship.

In April 2011, I joined as a web developer to work on internal web applications that support the Firefox development engineering. In rough order, I worked on...

  • Elmo: The web application for managing the state of Firefox localization
  • Socorro: When Firefox crashes and asks to send a crash dump, this is the storage plus website for analyzing that
  • Peekaboo: When people come to visit a Mozilla office, they sign in on a tablet at the reception desk
  • Balrog: For managing what versions are available for Firefox products to query when it's time to self-upgrade
  • Air Mozilla: For watching live streams and video archive of all recordings within the company
  • MozTrap: For QA engineers to track what was tested, and the results, when QA-testing Firefox products
  • Symbol Server: Where all C++ debug symbols are stored from the build pipeline to be used to source-map crash stack traces
  • Buildhub: To get a complete database of all and every individual build shipped of Firefox products
  • Remote Settings: Managing experiments and for Firefox to "phone home" for smaller updates/experiments between releases
  • MDN Web Docs: Where web developers go to look up all the latest and most detailed details about web APIs

This is an incomplete list because at Mozilla you get to help each other and I shipped a lot of smaller projects too, such as Contribute.json, Whatsdeployed, GitHub PR Triage, Bugzilla GitHub Bug Linker.

Reflecting back, the highlight of any project is when you get to meet or interact with the people you help. Few things are as rewarding as when someone you don't know in person finds out what you do and says: "Are you Peter?! The one who built XYZ? I love that stuff! We use it all the time now in my team. Thank you!" It's not a brag, because oftentimes what you build for fellow humans isn't brilliant engineering in any way. It's just something that someone needed. Perhaps the lesson learned is the importance of not celebrating what you've built, but of putting yourself in the same room as the people who use it. And, in fact, even if what you've built for someone else isn't particularly loved, meeting and properly interacting with the people who use "your stuff" gives you the best feedback; and who doesn't love constructive criticism that empowers you to build better stuff?

Mozilla is a great company. There is no doubt in my mind. We ship high-quality products and we do it with pride. There have definitely been some rough patches over the years but that happens and you just have to carry on and try to focus on delivering value. Firefox Nightly will continue to be my default browser and I'll happily click any Google search ads to help every now and then. THANK YOU everyone I've ever worked with at Mozilla! You are a wonderful bunch of people!

How to get all of MDN Web Docs running locally

June 9, 2021
1 comment Web development, MDN

tl;dr; git clone https://github.com/mdn/content.git && cd content && yarn install && yarn start && open http://localhost:5000/ will get you all of MDN Web Docs running on your laptop.

The MDN Web Docs is built from a git repository: github.com/mdn/content. It contains all you need to get all the content running locally, including search. Embedded inside that repository is a package.json which helps you start a Yari server, a.k.a. the preview server. It's a static build of the github.com/mdn/yari project which handles client-side rendering, search, and a just-in-time server-side rendering server.

Basics

All you need is the following:

▶ git clone https://github.com/mdn/content.git
▶ cd content
▶ yarn install
▶ yarn start

And now open http://localhost:5000 in your browser.

This will now run in "preview server" mode. It's meant for contributors (and core writers) to use when they're working on a git branch. Because of that, you'll see a "Writer's homepage" at the root URL. And when viewing each document, you get buttons about "flaws" and stuff. Looks like this:

Preview server

Alternative ways to download

If you don't want to use git clone you can download the ZIP file. For example:

▶ wget https://github.com/mdn/content/archive/refs/heads/main.zip
▶ unzip main.zip
▶ cd content-main
▶ yarn install
▶ yarn start

At the time of writing, the downloaded Zip file is 86MB and unzipped the directory is 278MB on disk.

When you use git clone, by default it will download all the git history. That can actually be useful. This way, when rendering each document, it can figure out from the git logs when each individual document was last modified. For example:

"Last modified"

If you don't care about the "Last modified" date, you can do a "shallow git clone" instead. Replace the above-mentioned first command with:

▶ git clone --depth 1 https://github.com/mdn/content.git

At the time of writing the shallow cloned content folder becomes 234MB instead of (the deep clone) 302MB.

Just the raw rendered data

Every MDN Web Docs page has an index.json equivalent. Take any MDN page and add /index.json to the URL. For example /en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/slice/index.json

Essentially, this is the intermediate state that's used for server-side rendering the page. A glorified way of sandwiching the content in a header, a footer, and a sidebar to the side. These URLs work on localhost:5000 too. Try http://localhost:5000/en-US/docs/Web/API/Fetch_API/Using_Fetch/index.json for example.
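
And if you want to consume that JSON programmatically, here's a minimal sketch (this assumes Node 18+'s built-in fetch, a .mjs file for top-level await, and that the preview server from earlier is running on localhost:5000):


const url =
  "http://localhost:5000/en-US/docs/Web/API/Fetch_API/Using_Fetch/index.json";

const response = await fetch(url);
// The top-level `doc` key holds the rendered document data (title, body, etc.)
const { doc } = await response.json();
console.log(doc.title);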

The content for that index.json is built just in time. It also contains a bunch of extra metadata about "flaws": a system used to highlight things that should be fixed and that are reasonably easy to automate. So it doesn't cover things like spelling mistakes or code snippets that are actually invalid.

But suppose you want all that raw (rendered) data, without any of the flaw detection; then you can run this command:

▶ BUILD_FLAW_LEVELS="*:ignore" yarn build

It'll take a while (because it produces an index.html file too). But now you have all the index.json files for everything in the newly created ./build/ directory. It should have created a lot of files:

▶ find build -name index.json | wc -l
   11649

If you just want a subtree of files you could have run it like this instead:

▶ BUILD_FOLDERSEARCH=web/javascript BUILD_FLAW_LEVELS="*:ignore" yarn build

Programmatic API access

The programmatic APIs are all about finding the source files. But you can use the sources to turn that into the built files you might need. Or just to get a list of URLs. To get started, create a file called find-files.js in the root:


const { Document } = require("@mdn/yari/content");

console.log(Document.findAll().count);

Now, run it like this:

▶ export CONTENT_ROOT=files

▶ node find-files.js
11649

Other things you can do with that findAll function:


const { Document } = require("@mdn/yari/content");

const found = Document.findAll({
  folderSearch: "web/javascript/reference/statements/f",
});
for (const document of found.iter()) {
  console.log(document.url);
}

Or, suppose you want to actually build each of these that you find:


const { Document } = require("@mdn/yari/content");
const { buildDocument } = require("@mdn/yari/build");

const found = Document.findAll({
  folderSearch: "web/javascript/reference/statements/f",
});

Promise.all([...found.iter()].map((document) => buildDocument(document))).then(
  (built) => {
    for (const { doc } of built) {
      console.log(doc.title.padEnd(20), doc.popularity);
    }
  }
);

That'll output something like this:

▶ node find-files.js
for                  0.0143
for await...of       0.0129
for...in             0.0748
for...of             0.0531
function declaration 0.0088
function*            0.0122

All the HTML content in production-grade mode

In the most basic form, it will start the "preview server" which is tailored towards building just in time and has all those buttons at the top for writers/contributors. If you want the more "production-grade" version, you can't use the copy of @mdn/yari that is "included" in the mdn/content repo. To do this, you need to git clone mdn/yari and install that. Hang on, this is about to get a bit more advanced:

▶ git clone https://github.com/mdn/yari.git
▶ cd yari
▶ yarn install
▶ yarn build:client
▶ yarn build:ssr
▶ CONTENT_ROOT=../files REACT_APP_DISABLE_AUTH=true BUILD_FLAW_LEVELS="*:ignore" yarn build
▶ CONTENT_ROOT=../files node server/static.js

Now, if you go to something like http://localhost:5000/en-US/docs/Web/Guide/ you'll get the same thing as you get on https://developer.mozilla.org but all on your laptop. Should be pretty snappy.

Is it really entirely offline?

No, it leaks a little. For example, there are interactive examples that use an iframe that's hardcoded to https://interactive-examples.mdn.mozilla.net/.

There are also external images. For example, you might get a live sample that refers to sample images on https://mdn.mozillademos.org/files/.... So that'll fail if you're without WiFi in a spaceship.

Conclusion

Making all of MDN Web Docs available offline is, honestly, not a priority. The focus is on A) a secure production build, and B) a good environment for previewing content changes. Search, for example, is a little bit tricky. When you're running it as a preview server, you can't do a full-text search on all the content, but you get a useful autocomplete search widget for navigating between different titles. And the full-text search engine is a remote, centralized server that you can't take with you offline.

But all the pieces are there. Somehow. It all depends on your use case and what you're willing to "compromise" on.
