Search Engine Indexing

How do search engines work?

  • Search engines work in several stages:

    • Crawling – think of this as gathering all of the books within a library

    • Indexing – think of this as reading the books and making a structured list of the information within the books

    • Ranking – think of this as recommending books to the reader

Crawling

  • Web pages are discovered by search engines through software programs called crawlers (or spiders, bots, or robots) 

  • Crawlers follow links from one webpage to another, systematically visiting pages on the web

  • They start from a set of seed URLs and visit other pages linked from those URLs

  • Crawlers follow rules set by website owners through mechanisms such as the robots.txt file, which tells crawlers which areas of a site they may explore and which they should avoid, so that the owner's preferences are respected

  • Once a crawler reaches a webpage, it fetches the HTML content of that page

  • The crawler examines the HTML structure and retrieves information, such as text content, headings, links, and metadata

  • To understand the structure of the webpage, the retrieved HTML is parsed: broken down into its individual components

  • Parsing identifies the elements, tags and attributes that hold valuable information, such as titles and headings
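The crawling steps above can be sketched in Python. Everything here is illustrative: the "web" is a hard-coded dictionary of invented URLs, and the site's robots.txt rules are reduced to a simple disallow set; a real crawler fetches pages over HTTP and parses the actual robots.txt file.

```python
from collections import deque
from html.parser import HTMLParser

# Hypothetical in-memory "web": URL -> HTML (a real crawler fetches over HTTP)
PAGES = {
    "https://example.com/": '<html><head><title>Home</title></head>'
                            '<body><a href="https://example.com/a">A</a>'
                            '<a href="https://example.com/private">P</a></body></html>',
    "https://example.com/a": '<html><head><title>Page A</title></head>'
                             '<body><a href="https://example.com/">Home</a></body></html>',
    "https://example.com/private": '<html><head><title>Private</title></head>'
                                   '<body></body></html>',
}

# URLs disallowed by the site's (imaginary) robots.txt
DISALLOWED = {"https://example.com/private"}

class LinkTitleParser(HTMLParser):
    """Extracts the <title> text and all <a href> links from one page."""
    def __init__(self):
        super().__init__()
        self.links, self.title, self._in_title = [], "", False
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [value for name, value in attrs if name == "href"]
        elif tag == "title":
            self._in_title = True
    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
    def handle_data(self, data):
        if self._in_title:
            self.title += data

def crawl(seeds):
    """Breadth-first crawl: visit seed URLs, then the pages they link to."""
    visited, queue, found = set(), deque(seeds), {}
    while queue:
        url = queue.popleft()
        if url in visited or url in DISALLOWED or url not in PAGES:
            continue
        visited.add(url)
        parser = LinkTitleParser()
        parser.feed(PAGES[url])      # "fetch" the page's HTML content
        found[url] = parser.title    # store extracted metadata
        queue.extend(parser.links)   # follow links to discover new pages
    return found

pages = crawl(["https://example.com/"])
print(sorted(pages))   # the /private page is skipped
```

Note how the disallow check happens before the page is fetched at all, mirroring how a polite crawler consults robots.txt before requesting a URL.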

Indexing

  • The data extracted from the webpage is indexed, which involves storing the collected information in a structured manner within a search engine’s database

  • Each word on the page is stored in the index as an entry, along with the word’s position on the page

  • The index allows for quick retrieval and ranking of relevant web pages in response to user queries
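A minimal sketch of such an index in Python, using invented page text: each word maps to the pages it appears on and its positions within each page (an "inverted index").

```python
# Sample pages (invented text) to index
pages = {
    "page1": "search engines crawl the web",
    "page2": "engines index the web pages",
}

def build_index(pages):
    """Map each word to {url: [positions where the word occurs]}."""
    index = {}
    for url, text in pages.items():
        for position, word in enumerate(text.split()):
            index.setdefault(word, {}).setdefault(url, []).append(position)
    return index

index = build_index(pages)
print(index["engines"])   # {'page1': [1], 'page2': [0]}
print(index["web"])       # {'page1': [4], 'page2': [3]}
```

Because the index maps words directly to pages, answering "which pages contain this word?" is a single dictionary lookup rather than a scan of every stored page.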

Ranking

  • When a user enters a query, the search engine searches the index for matching pages and returns the results it judges to be the highest quality and most relevant to the user’s query
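A simple illustration in Python, assuming an inverted index of the kind described above (the page data is invented): each query term is looked up in the index, and pages are ranked by how many matching occurrences they contain. Real ranking algorithms use many more signals than this.

```python
# Invented inverted index: word -> {url: [positions]}
index = {
    "web":     {"page1": [4], "page2": [3]},
    "engines": {"page1": [1], "page2": [0]},
    "crawl":   {"page1": [2]},
}

def search(query, index):
    """Score each page by total occurrences of the query terms."""
    scores = {}
    for term in query.lower().split():
        for url, positions in index.get(term, {}).items():
            scores[url] = scores.get(url, 0) + len(positions)
    # Highest-scoring (most relevant) pages first
    return sorted(scores, key=scores.get, reverse=True)

print(search("web crawl", index))   # ['page1', 'page2']
```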

Benefits of search engine crawling & indexing

  • The process of search engine indexing is essential for search engines to collect, examine and arrange online content

  • It involves collecting and storing information from web pages in a searchable index

  • There are many reasons for search engine crawling and indexing to happen:

    • Improved search results

    • Efficient retrieval

    • Ranking and relevance

    • Freshness and updates

Improved search results

  • Indexing webpages means search engines can:

    • Provide users with relevant and up-to-date search results

    • Match user searches with relevant content, increasing the chances of accurate and valuable results

  • This means the user is more likely to find what they’re looking for quickly, ideally on the first page of search results, without having to go to additional pages

Efficient retrieval

  • Indexing enables efficient retrieval of information

  • Search engines don’t need to scan the entire web for every search query. They can just search their indexed data to produce search results quickly

Ranking & relevance

  • Indexing enables search engines to assess the relevance and quality of web pages

  • Search result rankings are determined by various ranking algorithms that analyse indexed data. These algorithms consider factors such as keyword relevance, backlinks, and user engagement 
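One way to picture this is as a weighted sum of factor scores. The weights and page scores below are entirely invented for illustration; real ranking algorithms combine far more signals, and their exact workings are not public.

```python
# Hypothetical weights for the ranking factors mentioned above
WEIGHTS = {"keyword_relevance": 0.5, "backlinks": 0.3, "engagement": 0.2}

def rank_score(page):
    """Combine a page's factor scores (each 0.0-1.0) into one number."""
    return sum(WEIGHTS[factor] * page[factor] for factor in WEIGHTS)

# Invented factor scores for two pages
pages = [
    {"url": "a", "keyword_relevance": 0.9, "backlinks": 0.2, "engagement": 0.5},
    {"url": "b", "keyword_relevance": 0.6, "backlinks": 0.9, "engagement": 0.8},
]

ranked = sorted(pages, key=rank_score, reverse=True)
print([p["url"] for p in ranked])   # ['b', 'a']
```

Here page "b" wins despite weaker keyword relevance, because its backlink and engagement scores outweigh the difference, showing why ranking considers more than keyword matches alone.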

Freshness & updates

  • Search engine crawlers periodically revisit indexed web pages to detect updates and changes

  • This process helps ensure that search results reflect the latest content available on the web

  • If a webpage has been updated but not yet re-crawled, the indexed copy may be out of date and no longer relevant to the user’s search
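One simple way a crawler can detect such changes on a revisit is to store a fingerprint (hash) of each page's content and compare it with a fresh hash on the next crawl. The page text here is invented; the technique is a sketch, not how any particular search engine works.

```python
import hashlib

def fingerprint(html):
    """A short, fixed-size digest of the page's content."""
    return hashlib.sha256(html.encode()).hexdigest()

# Fingerprint stored when the page was last indexed (invented content)
stored = {"page1": fingerprint("<p>old content</p>")}

def needs_reindex(url, new_html, stored):
    """True if the page's content changed since it was last indexed."""
    return stored.get(url) != fingerprint(new_html)

print(needs_reindex("page1", "<p>old content</p>", stored))  # False
print(needs_reindex("page1", "<p>new content</p>", stored))  # True
```

Comparing digests is cheap, so the crawler can skip re-indexing unchanged pages and spend its effort on pages that actually changed.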
