Can a man-made computer become conscious?

Discuss any topics related to metaphysics (the philosophical study of the principles of reality) or epistemology (the philosophical study of knowledge) in this forum.

Re: Can a man-made computer become conscious?

Post Number:#2686  Postby JamesOfSeattle » September 13th, 2017, 4:23 pm

Belindi wrote:But if qualia were nothing but physical events your nearest and dearest, at least, could experience your perceptions.

But if digestion were nothing but physical events your friends could digest your food?

Belindi wrote:AI machines' conceptualisations are digital

AI machines can be analogical. Look at the neuromorphic chips.

Belindi wrote:Minds as usually understood imply privileged access.

That's fine, but machines can have privileged access to their experiences also.

Belindi wrote:When I wanted to recall that phrase 'privileged access' I consulted my own personal memories, which are memories of events from my perspective and mine alone. You could not have a clue as to the connotations that I personally cluster around the half-remembered phrase. Another person would recall the same phrase according to their own personal and unique memory systems.

And another machine could recall the same phrase according to its own personal and unique memory systems. But we haven't built that machine yet. It isn't Google. It's still 1962 and we're headed to the moon. There's no reason to think we can't get there, but it's gonna take some work.

*

-- Updated September 13th, 2017, 1:38 pm to add the following --

Tamminen wrote:I surrender.


If only that meant you understand what I'm saying ...
Tamminen wrote:James really seems to think that there is no fundamental difference between us and robots.

Of course there are fundamental differences, but the capacity for consciousness ain't one. And we haven't yet created a robot that's comparable, but we will.
Tamminen wrote:Maybe some day we'll see machine-Hamlets struggling about the meaning of being and planning suicide.

Count on it.
Tamminen wrote:Some philosophers say that philosophical discussion is impossible because we speak of different things. Sometimes I think they are right. But, on the other hand, in this case there seems to be a concrete question, but still we cannot find a common language. What is the problem with us?

It's really hard to overcome your own mental intuitions and biases, much more so someone else's.

*
(I feel like I'm coming off as arrogant, but I just need evidence and/or reason to change my mind, and such has not been forthcoming.)
User avatar
JamesOfSeattle
 
Posts: 268 (View: All / In topic)

Joined: October 16th, 2015, 11:20 pm

Re: Can a man-made computer become conscious?

Post Number:#2687  Postby Tamminen » September 13th, 2017, 5:01 pm

JamesOfSeattle wrote:(I feel like I'm coming off as arrogant, but I just need evidence and/or reason to change my mind, and such has not been forthcoming.)

No, you are not arrogant, only wrong. But that is my opinion; you will surely keep yours.
Tamminen
 
Posts: 173 (View: All / In topic)

Joined: April 19th, 2016, 2:53 pm

Re: Can a man-made computer become conscious?

Post Number:#2688  Postby Steve3007 » September 13th, 2017, 5:56 pm

Tamminen to JamesOfSeattle:
I surrender. James really seems to think that there is no fundamental difference between us and robots. Maybe some day we'll see machine-Hamlets struggling about the meaning of being and planning suicide.


I haven't read all of the previous posts, but I get the distinct impression that you think this could never, ever happen? That this "quintessence of dust" will always be fundamentally separate from actual dust?

If so, is it because you think that there is something fundamentally different inside human beings (or perhaps inside all living things?) that will make it forever impossible, even in principle, to replicate other than by the usual biological method? In other words, would you say that you're a "dualist" of some form?

If not, and if you think that the thing which makes us human and gives us consciousness is some complex property of the arrangement of the atoms in our brain and body, then surely, at least in principle, this could be replicated?
"Even men with steel hearts love to see a dog on the pitch."
Steve3007
 
Posts: 3776 (View: All / In topic)

Joined: June 15th, 2011, 5:53 pm
Location: UK
Favorite Philosopher: Eratosthenes

Re: Can a man-made computer become conscious?

Post Number:#2689  Postby Tamminen » September 14th, 2017, 3:18 am

Steve3007 wrote:If so, is it because you think that there is something fundamentally different inside human beings (or perhaps inside all living things?) that will make it forever impossible, even in principle, to replicate other than by the usual biological method? In other words, would you say that you're a "dualist" of some form?

Yes, I am a dualist in the sense that the subject-object relation is fundamental and that the subject is not a property of matter but something that is already there as an ontological precondition of being. Material objects like our bodies and robots are our instruments of being. Consciousness is the self-evident 'I am' of Descartes taken ontologically, not only epistemologically. Therefore I think that only natural organisms can be conscious.

Re: Can a man-made computer become conscious?

Post Number:#2690  Postby Belindi » September 14th, 2017, 5:50 am

James of Seattle wrote:

But if digestion were nothing but physical events your friends could digest your food?


Pre-digested food is not uncommon. Glucose can also be given intravenously. Dogs and some other animals regurgitate to feed their young.

Digestion can sometimes be felt when it goes wrong, in which case digestion also has a mental aspect.
Belindi
 
Posts: 803 (View: All / In topic)

Joined: September 11th, 2016, 2:11 pm

Re: Can a man-made computer become conscious?

Post Number:#2691  Postby Steve3007 » September 14th, 2017, 6:02 am

Tamminen:
Yes, I am a dualist in the sense that the subject-object relation is fundamental and that the subject is not a property of matter but something that is already there as an ontological precondition of being. Material objects like our bodies and robots are our instruments of being. Consciousness is the self-evident 'I am' of Descartes taken ontologically, not only epistemologically. Therefore I think that only natural organisms can be conscious.


Fair enough, but as I'm sure you're aware, this presents you with a problem. How do you define a natural organism? It's easy enough at the extreme ends of the spectrum. Humans are conscious. Rocks are not. But there is a quasi-continuum from humans to the world of complex but non-living chemistry (and therefore to the world of simple chemistry and on to physics). Are other apes conscious? Are other mammals conscious? Are plants conscious? Are bacteria conscious? Are viruses conscious? Are viroids conscious?

You have to draw an arbitrary dividing line and state "from here on up, I decree, consciousness is present". The fact that the dividing line is arbitrary (by which I mean it's placed there by us for our purposes and is not an objective, immovable property of Nature) makes it difficult to sustain this idea that there is some mysterious quintessence that only conscious beings possess.

Re: Can a man-made computer become conscious?

Post Number:#2692  Postby Belindi » September 14th, 2017, 6:07 am

James of Seattle wrote:

AI machines can be analogical. Look at the neuromorphic chips.


Sorry, I don't understand.

(Belindi wrote)Minds as usually understood imply privileged access.

(James replied)That's fine, but machines can have privileged access to their experiences also.


But they can potentially share the exact same experience. We cannot.

(Belindi wrote)When I wanted to recall that phrase 'privileged access' I consulted my own personal memories, which are memories of events from my perspective and mine alone. You could not have a clue as to the connotations that I personally cluster around the half-remembered phrase. Another person would recall the same phrase according to their own personal and unique memory systems.

(James replied)And another machine could recall the same phrase according to its own personal and unique memory systems. But we haven't built that machine yet. It isn't google. It's still 1962 and we're headed to the moon. There's no reason to think we can't get there, but it's gonna take some work.


But have AI machines personal and unique memories? If so, my notion of what computers are and do is very wrong. I would have thought that computers can potentially share any information whatsoever, in which case any private information in cyberspace is only potentially private and can be accessed publicly, even if the human originator first has to assent. Qualia, on the other hand, cannot ever be transferred (except perhaps in the special case of those conjoined twins).

It's conceivable that what you and I have on our desks really are literally computer terminals, i.e. are individuals only in the sense that they can be plugged into different electric points and occupy different spaces. You and I are true individuals. Even had we two been reared by the same parents and had the same genes as do identical twins, we could not ever feel each other's qualia.

-- Updated September 14th, 2017, 6:27 am to add the following --

James of Seattle wrote:

And that no machine at this present time qualifies its learning?

I'm not sure I understood this sentence, but I will agree to something near ... no machine at this present time "qualiafies" its experiences. I have said that qualia are experiences plus the further experiences that directly result, so the qualia of seeing red is the original experience of seeing red plus further experiences that are triggered by the first recognition of red. To date most machines have not been built to generate further experiences (recognitions) after the first, so their qualia would be minimal. Okay, if you want to require that qualia involve at least one further experience beyond the original recognition, then you could say they don't have qualia. However, there are people working on machine architectures that are more like the architectures in the human brain. IBM and Chris Eliasmith (not associated with IBM) come to mind. I haven't verified this, but I anticipate that these architectures will be getting closer to human-like qualia. I think they will be related to what Eliasmith calls semantic pointers.


I'm not sure that I understand what I wrote. I won't say "sorry" because I am like most other people uncertain about artificial intelligence. Just trying out a word to see if a new meaning fits.

I put the word 'learning' in my sentence to add to its import, which is that people do learn from experiences, and so I think do computers.

I would not attribute anything else to qualia besides the "original experience". If further experiences are added, the singular experience becomes infiltrated by memories which incorporate the quale or qualia in a concept. For instance, there is your concept of 'red', which is made of thousands of memories of red things and sometimes includes one further quale which you label 'red' and file away too with your concept 'red'.

I accept that machines may be made that do what the brain does in terms of qualia. When this happens the machines will in important regards be saddled with responsibilities and rights. Like us they will need to stay humanised and not revert to narrow machine-think. If the qualia-capable machines cannot be made so that they are capable of responsibility, they will have to be ethically controlled, as they will be terribly dangerous.

Re: Can a man-made computer become conscious?

Post Number:#2693  Postby Steve3007 » September 14th, 2017, 6:30 am

James:
AI machines can be analogical. Look at the neuromorphic chips.


Belindi:
Sorry, I don't understand.


I suspect he means that they can be analogue as well as digital. Digital meaning that they store information in discrete chunks (1's and 0's) and analogue meaning that they use continuous variations. Analogue computers that use continuous variations in voltages to perform their calculations and store their data can be constructed (I remember making one years ago as part of an electronics course at Uni). I think it's a distinction that is not directly relevant to the question of the principle of whether an artificially created machine can be said to be conscious. It's a detail (albeit a central one) of the mechanism.
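To make the distinction concrete, here is a toy sketch (my own illustration in Python, not anything from the thread): a digital machine must round a continuous quantity to one of a fixed number of levels, whereas an idealised analogue machine would carry the quantity itself.

```python
def quantize(voltage, v_max=5.0, bits=8):
    """Round a continuous voltage in [0, v_max] to one of 2**bits levels."""
    step = v_max / (2 ** bits - 1)        # spacing between representable values
    return round(voltage / step) * step

analogue_value = 3.14159                  # the continuous quantity itself
digital_value = quantize(analogue_value)  # nearest representable level

print(digital_value == analogue_value)    # False: some information was lost
print(abs(digital_value - analogue_value) < 0.02)  # True: but not much of it
```

With 8 bits there are only 256 representable values, so the stored number is close to, but almost never exactly equal to, the original.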

Re: Can a man-made computer become conscious?

Post Number:#2694  Postby Belindi » September 14th, 2017, 7:28 am

Please take into account my complete ignorance of computer construction.

Steve wrote:

I suspect he means that they can be analogue as well as digital. Digital meaning that they store information in discrete chunks (1's and 0's) and analogue meaning that they use continuous variations. Analogue computers that use continuous variations in voltages to perform their calculations and store their data can be constructed (I remember making one years ago as part of an electronics course at Uni). I think it's a distinction that is not directly relevant to the question of the principle of whether an artificially created machine can be said to be conscious. It's a detail (albeit a central one) of the mechanism.



I thought that we differentiate one quale (say, a 'pain') from another quale (say, a 'pleasure') because qualia are in fact continuously various, not all-or-nothing. So we might have 'slight pain' or we might have 'huge pleasure'. Is this a matter for neuroscience? In particular, the functioning of the part of the brain that deals in pain and of the part that deals in pleasure?

The colour spectrum is presumably an objective measure of colour, whatever else colour spectrometry is used for. In that case it's reasonable to presume that human vision is attuned to the colour spectrum, insofar as it is attuned, due to natural selection. So red, for instance, is, according to the qualia hypothesis, triggered by an aspect of the colour spectrum in operation. Human individuals vary; mind and culture inhibit perceptions. Red is therefore various in its intensity, focus, and hue according to the individual human experiencing the quale. Some cultures of belief do inhibit inherent abilities. (The labelling is neither here nor there and is a matter of the social usage of language.) The human individual, if free from cultural inhibition, does, I presume, see red as intense, bright, and pure as to hue. However, the qualia he subsequently names as 'red' vary in intensity according to the objective intensity of the source.

Now, I don't know whether or not an analogue computer would remember a continuity of information about intensity of the source (tonal value), brightness of the source, or hue of the source, as do we humans. Or would the analogue computer be like the digital computer and record all-or-nothing?

What 'analogue' means to me is continuity as conferred by the physical world source. However as I said I know nothing about computers, and to make me even more muddled, cyberspace has more info than any human could have.

Re: Can a man-made computer become conscious?

Post Number:#2695  Postby Steve3007 » September 14th, 2017, 8:05 am

Belindi:
I thought that we differentiate one quale (say, a 'pain') from another quale (say, a 'pleasure') because qualia are in fact continuously various, not all-or-nothing...


I guess. I don't know much about it. But if we do believe that our brains are large collections of interconnected neurons and that everything about our personality and our feelings is somehow contained in the configuration of those interconnections (as opposed to believing that there is something else, fundamentally non-physical, which defines us), then I presume any particular quale would be a very large, complex mixture of millions of connections and would probably vary a lot. I would have thought there would be no simple, hard mapping of a quale to neuron interconnections. A quale seems to be far too high a level concept for that.

The colour spectrum is presumably an objective measure of colour , whatever else colour spectrometry is used for.


There are various measures of colour, and some do take into account the perception of colour by humans. So in that sense they are subjective. But I guess you could call a colour spectrum which simply plots the wavelength of light on a scale about as objective as it gets.

Now, I don't know whether or not an analogue computer would remember a continuity of information about intensity of the source (tonal value), brightness of the source, or hue of the source, as do we humans. Or would the analogue computer be like the digital computer and record all-or-nothing?


Yes, I guess an analogue computer could indeed store colours as a continuum. The digital computer stores them as discrete values. So, for example, a colour is typically stored as 3 numbers (representing red, green and blue), each of which has 8 binary digits and can therefore have a value between 0 and 255. With 256 different levels of red, green and blue, that's about 16.7 million different colours. The number of different colours stored in an idealised analogue computer would, in principle, tend to infinity.
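The arithmetic in that paragraph can be checked in a few lines (a minimal sketch in Python):

```python
bits_per_channel = 8
levels = 2 ** bits_per_channel   # 256 discrete levels of red, green, blue
total_colours = levels ** 3      # every combination of the three channels

print(levels)         # 256
print(total_colours)  # 16777216, i.e. roughly 16.7 million
```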

What 'analogue' means to me is continuity as conferred by the physical world source. However as I said I know nothing about computers, and to make me even more muddled, cyberspace has more info than any human could have.


Yes, in computing and electronics analogue also means continuous and digital means discrete.

Re: Can a man-made computer become conscious?

Post Number:#2696  Postby Tamminen » September 14th, 2017, 8:59 am

Steve3007 wrote:You have to draw an arbitrary dividing line and state "from here on up, I decree, consciousness is present". The fact that the dividing line is arbitrary (by which I mean it's placed there by us for our purposes and is not an objective, immovable property of Nature) makes it difficult to sustain this idea that there is some mysterious quintessence that only conscious beings possess.

I admit it is difficult for us to draw a line between conscious organisms and non-conscious things, but I think it is an on-off situation: there is consciousness or there is not, because consciousness is essentially subjectivity, and in my view subjectivity is not a property of matter. I would say there is consciousness if there is a temporal present, an elementary unit of subjective time. But it may be impossible for us to detect whether a thing is conscious or not. However, I suppose the minimum criterion is a natural evolution of the things that consciousness adopts as its instruments of being. I cannot prove this, though.

Re: Can a man-made computer become conscious?

Post Number:#2697  Postby Belindi » September 14th, 2017, 1:54 pm

Steve wrote:

A quale seems to be far too high a level concept for that.


I think that a quale is as low-level as you can get and still be aware. I think that a quale is not a concept but a simple percept. Even simpler than a percept, if such is possible.

-- Updated September 14th, 2017, 1:56 pm to add the following --

Steve wrote:

in computing and electronics analogue also means continuous and digital means discrete.


I am happy to know that and hope to remember that nice sentence verbatim. :)

-- Updated September 14th, 2017, 1:58 pm to add the following --

P.S. I was taught that neuronal action, which is electrochemical, is all-or-nothing, i.e. discrete.

Re: Can a man-made computer become conscious?

Post Number:#2698  Postby JamesOfSeattle » September 14th, 2017, 7:14 pm

Belindi, I really appreciate your trying to engage with me in this topic. The concepts I'm trying to communicate are not intuitive and I appreciate the practice.

Regarding your several examples of "shared digestion", I submit that those examples are of serial digestion as opposed to simultaneous digestion. A pertinent example would be if the twins shared a stomach and both benefitted from the digestion that happened there.

Belindi wrote:But [machines] can potentially share the exact same experience.

[Note: in the following I make statements based on my own ideas, which are not yet generally accepted. I think they will be, eventually.]
I disagree. The experiences can be extremely similar (two identical machines running identical software), but experience requires hardware, and experiences on different hardware cannot be shared. Even the twins, who share some, but not all, of their neuronal hardware, do not have the same experiences. An experience is defined (by me) as an event in which an agent is given input and produces output. [NOTE: the "agent" is not what we assign consciousness to. We assign consciousness to a larger system which includes the agent.] In the case of the twins, they share parts of the agent (via sharing part of the thalamus, at least), and so they also share some of the inputs (via the shared part of the thalamus). Also, the sequelae of the experience (qualia) of one may become available to the agent of the other. This is how one might have an idea of what the other is thinking.
Belindi wrote:But have AI machines personal and unique memories? If so, my notion of what computers are and do is very wrong.

I think it very likely that your notion of what computers do is incomplete, as Steve explains above. I would also suggest that your notion of information is incomplete. Don't worry, you're in good company. Many/most people's concept of information is based on Claude Shannon's work, but I suggest his work applies only to a subset of information, namely coded symbolic information.
Belindi wrote:I would have thought that computers can potentially share any information whatsoever.

Only general purpose/universal computers (as described by Turing) can potentially share any coded information whatsoever. And such computers can simulate any analogue computer to any accuracy short of perfection, but the closer you want to get to perfection, the longer the calculations take. So the neuromorphic chips being produced by IBM are more like analogue computers, and are specifically not universal computers. The programs that run on the chips can also be run on universal computers, but they run much slower.
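The accuracy-versus-time trade-off described above can be illustrated with a toy example (my own sketch, not a description of IBM's chips): a digital machine approximating the continuous process dV/dt = -V in discrete steps. Shrinking the step size improves the approximation but multiplies the number of steps, i.e. the computation time.

```python
import math

def simulate_decay(v0, t_end, dt):
    """Euler-method (discrete) approximation of the continuous V(t) = v0 * e**(-t)."""
    n_steps = round(t_end / dt)
    v = v0
    for _ in range(n_steps):
        v += -v * dt               # discrete update standing in for continuous change
    return v, n_steps

exact = math.exp(-1.0)             # the "analogue" answer at t = 1
for dt in (0.1, 0.01, 0.001):
    approx, n = simulate_decay(1.0, 1.0, dt)
    print(n, abs(approx - exact))  # error shrinks as the step count grows
```

Each tenfold reduction in the step size costs ten times as many updates, which is the "closer to perfection, longer to calculate" point in miniature.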

Belindi wrote:Qualia, on the other hand, cannot ever be transferred (except perhaps in the special case of those conjoined twins).

Agreed.
Belindi wrote:I would not attribute anything else to qualia besides the "original experience". If further experiences are added, the singular experience becomes infiltrated by memories which incorporate the quale or qualia in a concept. For instance, there is your concept of 'red', which is made of thousands of memories of red things and sometimes includes one further quale which you label 'red' and file away too with your concept 'red'.

I would accept this, but then I would say that if exactly one experience happened, you would never know it. For example, if only one of the "red" photoreceptors in one of your eyes triggered exactly one time, you would never know it. It's only when (I'm guessing) thousands of them trigger repeatedly over a sufficiently long period of time that you notice.

Belindi wrote:I accept that machines may be made that do what the brain does in terms of qualia. When this happens the machines will in important regards be saddled with responsibilities and rights. Like us they will need to stay humanised and not revert to narrow machine-think. If the qualia-capable machines cannot be made so that they are capable of responsibility, they will have to be ethically controlled, as they will be terribly dangerous.

I think you are conflating perception and intelligence and emotion. I think those are three very distinct things.

*

-- Updated September 14th, 2017, 8:33 pm to add the following --

Belindi wrote:Steve wrote:

A quale seems to be far too high a level concept for that.


I think that a quale is as low-level as you can get and still be aware. I think that a quale is not a concept but a simple percept. Even simpler than a percept, if such is possible.

I'm with Belindi on this one.
I was taught that neuronal action, which is electrochemical, is all-or-nothing, i.e. discrete.

This is true, but the decision to fire is analogue.
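A toy leaky integrate-and-fire neuron (a standard textbook model, sketched here as my own illustration) shows both halves of that point: the membrane potential accumulates continuously, while the spike itself is all-or-nothing.

```python
def lif_step(v, input_current, leak=0.1, threshold=1.0):
    """One update of a leaky integrate-and-fire neuron."""
    v = v + input_current - leak * v   # analogue: potential varies continuously
    if v >= threshold:                 # the only discrete, all-or-nothing event
        return 0.0, True               # reset the potential and emit a spike
    return v, False

v, spikes = 0.0, []
for _ in range(20):                    # steady sub-threshold input...
    v, fired = lif_step(v, input_current=0.3)
    spikes.append(fired)

print(spikes.count(True))              # ...yields a few discrete spikes
```

The "decision" to fire is just the continuously varying potential crossing a threshold; only the output is discrete.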

*

Re: Can a man-made computer become conscious?

Post Number:#2699  Postby Tamminen » September 15th, 2017, 4:42 am

Steve:
...that there is some mysterious quintessence that only conscious beings possess.

I think you did not get it. It is not a substance à la Descartes, but consciousness itself as an original and fundamental precondition of all being whatsoever. If there is an 'I am', a subjective experience of subjective time, then there is consciousness; if not, there is no consciousness. This is an ontological statement. And note that consciousness can be a precondition of all being although not all being is conscious.

This is a good example of two incompatible horizons that lead to two incompatible languages and a total lack of understanding. Or am I too pessimistic here?

Re: Can a man-made computer become conscious?

Post Number:#2700  Postby Belindi » September 15th, 2017, 5:33 am

James of Seattle wrote:

Belindi, I really appreciate your trying to engage with me in this topic. The concepts I'm trying to communicate are not intuitive and I appreciate the practice.

That sounds nice to me. I likewise.

James:
Regarding your several examples of "shared digestion", I submit that those examples are of serial digestion as opposed to simultaneous digestion. A pertinent example would be if the twins shared a stomach and both benefitted from the digestion that happened there.


"The twins shared... both benefitted...". What differentiates one twin from another? Objectively, their nearest and dearest know that the twins have different personalities and behaviour. Subjectively, the twins feel themselves to be separate beings. At this juncture enters the dreaded word 'Why'.
How the twins have different personalities, and how they are different subjects of experience, may be addressed as Spinoza would address the question. The twins' ideas of themselves are the ideas of their bodies, such as they are.

Or how the twins are different subjects of experience may be addressed existentially, as I fancy Tamminen was doing a few posts back when he quoted "I am". Each twin is Dasein thrown into an individual life: each twin must engage as an individual.

I fancy that the "Spinoza" explanation is a How? explanation. And that the "Heidegger" explanation is a Why? explanation.
*

I will have to stop wittering about computers of which I know and understand so very little. I'll confine my remarks to qualia of which I feel a glimmering of comprehension.




(Belindi wrote)
Qualia, on the other hand, cannot ever be transferred (except perhaps in the special case of those conjoined twins).

(James)Agreed.

(Belindi)
I would not attribute anything else to qualia besides the "original experience". If further experiences are added, the singular experience becomes infiltrated by memories which incorporate the quale or qualia in a concept. For instance, there is your concept of 'red', which is made of thousands of memories of red things and sometimes includes one further quale which you label 'red' and file away too with your concept 'red'.

(James)I would accept this, but then I would say that if exactly one experience happened, you would never know it. For example, if only one of the "red" photoreceptors in one of your eyes triggered exactly one time, you would never know it. It's only when (I'm guessing) thousands of them trigger repeatedly over a sufficiently long period of time that you notice.


I am persuaded that qualia are at that cusp of awareness which is not actually manifested in waking consciousness. It may be that what is manifested in conscious awareness is conceptualisations. Dreams are sheer qualia which transform into narratives during the initial emergence of the waking state. What we call dreams are confabulations. There is a great deal of neuroscience regarding brain chemistry and how fluctuations in it change our states of consciousness through waking awareness, dreaming, deep dreamless sleep, lucid dreaming, and occasional hallucination.

(Belindi)
(Belindi)
I accept that machines may be made that do what the brain does in terms of qualia. When this happens the machines will in important regards be saddled with responsibilities and rights. Like us they will need to stay humanised and not revert to narrow machine-think. If the qualia-capable machines cannot be made so that they are capable of responsibility, they will have to be ethically controlled, as they will be terribly dangerous.

(James)
I think you are conflating perception and intelligence and emotion. I think those are three very distinct things.


I am not conflating those. I am claiming that because qualia are the cause of our feelings of self, and because individual bodies are objective selves, selves are those special agents of change which are, as an existential fact, responsible beings. In short, we cannot escape our responsibility to act, and even cowering in a corner is action.

James wrote, quoting me:


I was taught that neuronal action, which is electrochemical, is all-or-nothing, i.e. discrete.

This is true, but the decision to fire is analogue.

But "the decision" is also a confabulation of what the brain/mind has determined will happen. James, your Free Will is showing, inadvertently I guess.

-- Updated September 15th, 2017, 5:34 am to add the following --

Sorry, my editing became a little haywire at the end.
