Thursday, February 27, 2025

Huge data dump on USOs (Unidentified Submerged Objects), UFOs and water - Part 1 - Marco Bianchini "USOCAT" catalogue, Richard Dolan spreadsheet, Reddit "USOs" post collection, Jan Aldrich's NavCat, "Russia's USO Secrets" etc

Introduction

As outlined (with links) in the sections below, I have a few treats for anyone interested in reports of "USOs" (Unidentified Submerged Objects)...

Italian USO / UFO researcher, Marco Bianchini, has kindly given me permission to upload a copy of his "USOCAT" catalogue of Italian USO cases. Links below.

This week, Richard Dolan also helpfully shared on his website a PDF of his spreadsheet of USO sightings. I include a link below to that document on his website and also include code I've generated so that anyone interested can easily "reverse engineer" that PDF back into an Excel spreadsheet. (I'm not uploading a copy of the Excel file itself as I haven't received permission to add it).

I've also created a chronological summary (125 pages) of Reddit posts to the "USOs" subreddit.

As an experiment with the recently released "Deep Research" tool from ChatGPT, I have uploaded a report generated automatically by that tool on the USO topic.  

Finally, I've uploaded a "podcast" generated by AI software after being fed the book "Russia's USO Secrets" by Paul Stonehill and Philip Mantle (with permission from both of those authors).



(1) Marco Bianchini "USOCAT" catalogue 

USO/UFO researcher Marco Bianchini is a member of the Italian UFO group CISU ("Centro Italiano Studi Ufologici"): 
https://www.cisu.org/

Marco has kindly given me permission to upload a copy of his "USOCAT" catalogue to my online archive hosted by the AFU in Sweden.  USOCAT provides details of reported observations of USOs in Italy's rivers, lakes and territorial waters:
https://files.afu.se/Downloads/?dir=.%2FDatabases%2FMarco%20Bianchini%20USOcat


(I really should find the time to seek permission to upload various other UFO/USO catalogues and databases. I now have over 70 in my offline collection and some of their creators may be willing to let me upload them, although I know some will not...). 

Marco has also helpfully given me permission to add a copy of his list of books on the USO topic in chronological order. I think this document is useful for anyone doing any deep dives (no pun intended...) into USOs. I have included that further document from Marco in the same folder as his USOCAT, at the link above.



(2) Reddit "USOs" post collection

As mentioned in the introduction, I've created a chronological summary (125 pages) of Reddit posts to the "USOs" subreddit.

That PDF can be downloaded here:
https://files.afu.se/Downloads/Websites/Reddit/Subreddit%20-%20USOs/Koi%2C%20Isaac%20-%20Chronology%20of%20USOs%20Subreddit%20Posts.pdf
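
For anyone wanting to build a similar chronology for another subreddit, the sketch below shows one possible approach in Python, using the requests library against Reddit's public JSON listing of a subreddit's posts. This is illustrative only (it is not necessarily the exact method I used), and note that Reddit's listings only reach back roughly 1,000 posts this way:

import time
import requests

def fetch_subreddit_chronology(subreddit="USOs"):
    """
    Pages through a subreddit's "new" listing via Reddit's public JSON API
    and returns (timestamp, title, url) tuples sorted oldest-first.
    """
    headers = {"User-Agent": "uso-chronology-sketch/0.1"}  # Reddit expects a descriptive User-Agent
    posts = []
    after = None
    while True:
        params = {"limit": 100}
        if after:
            params["after"] = after
        response = requests.get(
            f"https://www.reddit.com/r/{subreddit}/new.json",
            headers=headers, params=params, timeout=30)
        response.raise_for_status()
        listing = response.json()["data"]
        for child in listing["children"]:
            post = child["data"]
            posts.append((post["created_utc"], post["title"],
                          "https://www.reddit.com" + post["permalink"]))
        after = listing["after"]  # pagination token; None when no more pages
        if not after:
            break
        time.sleep(2)  # be polite: unauthenticated requests are rate-limited
    return sorted(posts)

if __name__ == "__main__":
    for created_utc, title, url in fetch_subreddit_chronology():
        print(created_utc, title, url)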


(3) Automated podcast about the book "Russia's USO Secrets"

I recently wrote about some of my experiments applying Artificial Intelligence to UFO research, using an AI tool to generate podcast-style discussions of various UFO books, scientific papers and UFO documents. The voices of the podcast “hosts” are AI generated, as is all the discussion. I thought the results were interesting and potentially useful (although obviously there is a lot of room for improvement, which I anticipate will occur fairly rapidly…). 

I’ve now uploaded to YouTube (link below) a further such “podcast”, which discusses a book on Russian USOs, "Russia's USO Secrets" by Paul Stonehill and Philip Mantle.

https://youtu.be/lLo8HtIuI5w




One of the co-authors of that book, Paul Stonehill, also has various videos on YouTube about Russian USO reports.


(4) Jan Aldrich's NavCat

Jan Aldrich of Project 1947 has previously shared on his website a "draft" catalogue of "UFOs/USOs Report by Seagoing Services".  

Jan has previously given me permission to archive the entirety of the Project 1947 website, so I've now added a PDF copy of the NavCat catalogue to my online folder of UFO/USO catalogues and databases, specifically at:




(5) ChatGPT "Deep Research" on USOs 

As an experiment with the recently released "Deep Research" tool from ChatGPT, I have uploaded a report generated automatically by that tool on the USO topic:
https://files.afu.se/Downloads/Websites/AI/2025%20-%20ChatGPT/o3%20Deep%20Research/0%20-%20Other/USOs/AI%20-%202025%20-%20o3%20Deep%20Research%20-%20USOs%20-%20Unidentified%20Submerged%20Objects.pdf

I think the results are sufficiently interesting to be worth expanding upon in a separate mini-project, which I hope to post within the next day or two.





(6) Richard Dolan's USOs spreadsheet

Earlier this week, Richard Dolan helpfully shared on his website a PDF of his spreadsheet of USO sightings. That spreadsheet includes the data for all three volumes of his "A History of USOs" series.

Here is some code I've generated so that anyone interested can easily "reverse engineer" (yes, UFO pun intended...) that PDF back into an Excel spreadsheet (or CSV file, etc.) so that its content can be assimilated (yes, Star Trek pun intended...) more easily into a unified collection of various UFO catalogues and databases. 

(I'm not uploading a copy of the Excel file itself as I haven't received permission from Richard Dolan to add it to my online archive).


import pdfplumber
import csv

def extract_complete_sheet(pdf_path):
    """
    Extracts the COMPLETE sheet table from the PDF using pdfplumber's extract_table() method.
    It looks for pages containing the marker "USO History Spreadsheet Data COMPLETE" and
    collects table data from those pages. Duplicate header rows (starting with "DATE") are skipped.
    Any semicolons in cells are removed.
    """
    complete_data = []
    header = None

    with pdfplumber.open(pdf_path) as pdf:
        for page in pdf.pages:
            page_text = page.extract_text() or ""
            # Process only pages that mention the COMPLETE sheet
            if "USO History Spreadsheet Data COMPLETE" in page_text:
                table = page.extract_table()
                if table:
                    for row in table:
                        # Skip rows that are completely empty
                        if not any(cell and cell.strip() for cell in row):
                            continue
                        # Remove semicolons from each cell (if cell is None, use empty string)
                        clean_row = [cell.replace(";", "") if cell else "" for cell in row]
                        # Identify header rows by checking if the first cell starts with "DATE"
                        if header is None and clean_row[0].strip().upper().startswith("DATE"):
                            header = clean_row
                            complete_data.append(header)
                        else:
                            # Skip duplicate header rows
                            if header and clean_row[0].strip().upper().startswith("DATE"):
                                continue
                            complete_data.append(clean_row)
    return complete_data

def write_csv(data, output_csv):
    """
    Writes the extracted table data to a CSV file using semicolons as delimiters.
    """
    with open(output_csv, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter=";")
        for row in data:
            writer.writerow(row)

if __name__ == "__main__":
    pdf_path = "richard-dolan-uso-history-spreadsheet-data.pdf"  # Relative path (file in same folder)
    output_csv = "dolan USO spreadsheet.csv"                      # Output file in same folder

    data = extract_complete_sheet(pdf_path)
    if not data:
        print("No data extracted. Please verify that the PDF contains the expected COMPLETE sheet table.")
    else:
        write_csv(data, output_csv)
        print(f"CSV file has been saved as '{output_csv}'")

This code was created using a free online AI tool (Grok 3) in a matter of minutes. My main interest in sharing this code is to stimulate thinking about using AI to generate many sets of similar code for extracting data from individual UFO books and articles into structured electronic formats and (permissions allowing...) assimilating that data into more comprehensive collections.

