Module: github.com/keinberger/goScraper
Version: 1.0.3
Repository: https://github.com/keinberger/goscraper.git
Documentation: pkg.go.dev

# README


goScraper

goScraper is a small web-scraping library for Go.

Installation

The package can be installed manually using

go get github.com/keinberger/goScraper

It may also be imported directly when using Go modules

import "github.com/keinberger/goScraper"

Usage

The package provides several exported functions. However, the main scrape functions

func (w Website) Scrape(funcs map[string]interface{}, vars ...interface{}) (string, error)
func (el lookUpElement) ScrapeTreeForElement(node *html.Node) (string, error)
func (e *Element) GetElementNodes(doc *html.Node) ([]*html.Node, error)

should be the preferred way to use the scraper library.

As these functions build on the other exported functions, they bundle all of the library's features while requiring only minimal input. For the main Scrape() function, the user only has to provide a custom Website variable.

Example using Scrape()

This example shows how to scrape a website for specific HTML elements. The matched elements are returned chained together, separated by a custom separator.

The example calls Scrape() on a custom Website variable. The variadic arguments of Scrape() are optional and are not needed in this example.

package main

import (
	"fmt"
	"github.com/keinberger/goScraper"
)

func main() {
	website := scraper.Website{
		URL: "https://wikipedia.org/wiki/wikipedia",
		Elements: []scraper.Element{
			{
				HtmlElement: scraper.HtmlElement{
					Typ: "h1",
					Tags: []scraper.Tag{
						{
							Typ:   "id",
							Value: "firstHeading",
						},
					},
				},
			},
			{
				HtmlElement: scraper.HtmlElement{
					Typ: "td",
					Tags: []scraper.Tag{
						{
							Typ:   "class",
							Value: "infobox-data",
						},
					},
				},
				Index: 0,
			},
		},
		Separator: ", ",
	}

	scraped, err := website.Scrape(nil)
	if err != nil {
		panic(err)
	}

	fmt.Println(scraped)
}

Example using ScrapeTreeForElement()

This example uses ScrapeTreeForElement, which returns the content of an HTML element (*html.Node) inside a larger node tree. This function is especially useful if one only wants a single HTML element from a website but still wants to retain control over formatting settings.

package main

import (
	"fmt"
	"github.com/keinberger/goScraper"
)

func main() {
	htmlNode, err := scraper.GetHTMLNode("https://wikipedia.org/wiki/wikipedia")
	if err != nil {
		panic(err)
	}

	element := scraper.Element{
		HtmlElement: scraper.HtmlElement{
			Typ: "li",
			Tags: []scraper.Tag{
				{
					Typ:   "id",
					Value: "ca-viewsource",
				},
			},
		},
	}
	content, err := element.ScrapeTreeForElement(htmlNode)
	if err != nil {
		panic(err)
	}
	fmt.Println(content)
}

Other exported functions

GetElementNodes returns all HTML element nodes ([]*html.Node) found in a node tree htmlNode *html.Node that have the same properties as e *Element

func (e *Element) GetElementNodes(htmlNode *html.Node) ([]*html.Node, error)

GetTextOfNode returns the text content of an HTML element node *html.Node

func GetTextOfNode(node *html.Node, notRecursive bool) (text string) 

RenderNode returns the string representation of a node *html.Node

func RenderNode(node *html.Node) string

GetHTMLNode returns the node tree *html.Node of the html string data

func GetHTMLNode(data string) (*html.Node, error)

GetHTML returns the HTML data of URL

func GetHTML(URL string) (string, error)

Contributions

I created this project as a side project alongside my normal work. Contributions are very welcome; just open an issue or create a pull request if you want to contribute.

# Functions

GetHTML returns the HTML data of URL.
GetHTMLNode returns the node tree of the html string data.
GetTextOfNode returns the content of an html element.
RenderNode returns the string representation of an html.Node.

# Constants

ErrIdxOutOfRange will be returned if the index of an array is out of range.
ErrMissingElement will be returned if the element is missing.
ErrNoNodeFound will be returned if no element was found.

# Structs

Element defines the data structure for an element to be looked up by the scraper.
Error defines the data structure for a custom error.
FormatSettings defines the data structure for optional formatting settings of a LookUpElement.
HtmlElement defines the data structure for an HTML element.
ReplaceObj defines the data structure for an object, that has to be replaced.
Settings defines the data structure for optional settings of a LookUpElement.
Tag defines the data structure for an HTML Tag.
Website defines the website data type for the scraper.
