PMXBOT Log file Viewer


#mongodb logs for Wednesday the 6th of August, 2014

[00:35:45] <klevison> about references, is there a best practice for them? I've read about many approaches.
[02:10:35] <sputnik13> hello anyone around?
[02:11:33] <sputnik13> I have a 32bit system running mongodb, and it hit the 2GB mark and now refuses to start... I need to at least delete some data and get the thing back up, is there a way to delete objects from a mongodb database file offline?
[02:13:15] <joannac> mongodump from the dbpath, and move to 64bit
[02:13:30] <sputnik13> right, not an option right now
[02:13:41] <joannac> which part?
[02:13:44] <sputnik13> both
[02:13:58] <sputnik13> I need to truncate the database in place
[02:14:06] <joannac> start with --noprealloc ?
[02:14:07] <sputnik13> no way to do something like this?
[02:14:49] <joannac> if it starts, start removing documents
[02:15:34] <sputnik13> nope, DBException 13636
[02:16:27] <sputnik13> mmap private failed with out of memory
[02:16:53] <joannac> willing to take data loss?
[02:17:06] <sputnik13> yes, but I think I got it up
[02:17:14] <joannac> cool, how?
[02:17:16] <sputnik13> launching on CLI without journal seems to allow it to come up
[02:17:23] <sputnik13> the config file had journal turned on
[02:17:37] <joannac> cool
[02:17:55] <joannac> hope you didn't have any unreplayed journal entries
[02:18:06] <joannac> although you can't do anything about that now
[02:56:29] <sputnik13> ok, I got most documents deleted from collections, but the space isn't freeing up... how do I force a cleanup?
[03:31:49] <joannac> sputnik13: how much spare disk space do you have
[03:31:54] <joannac> and how much data?
[03:35:07] <sputnik13> joannac: not enough :(
[03:35:23] <sputnik13> joannac: I just did a mongoexport on the collections I had to save
[03:35:31] <sputnik13> and dropped the database, it worked out
[03:35:38] <sputnik13> joannac: thanks :-)
[06:50:48] <olivierrr1> hello
[07:08:56] <joannac> hi
[07:15:03] <snowmanas> HI. I have a problem. I have migrated my replicaset to new hardware and did all of the rs.reconfig(cfg) stuff etc.
[07:15:29] <snowmanas> in rs.status() I see all new members present, 1 primary and 2 secondaries, however in sh.status through mongos I see the old ones in the "Shard" section
[07:15:44] <snowmanas> how can I update the config database in order to have the correct replica sets?
[07:19:08] <joannac> you'll have to modify the config database, and then bounce all your members
[07:21:34] <joannac> Migrating bit by bit would've avoided this
[07:21:41] <joannac> http://docs.mongodb.org/manual/tutorial/migrate-sharded-cluster-to-new-hardware/#migrate-a-replica-set-shard
[09:23:41] <davidcsi> hello guys, I wonder if you can help me out with a problem I'm having with the C++ driver... I'm executing an update with the upsert flag as true, the query matches exactly, but the driver is inserting a new doc instead of "pushing" into the existing doc. I set the profiling to 2 and copied/pasted the query/update into the shell and it works perfectly... any ideas? I posted the question on the mongodb-user list and on stackoverflow (http:/
[09:25:29] <rspijker> davidcsi: the link got cut off :)
[09:25:30] <ernetas> Derick: oh, by the way, we managed to extract all of the data! My colleague wrote an extractor in golang with another mongo driver. Worked like a charm, although it took us a day to scan 300 gigs of disk dumps, verify if data stored in BSONs isn't malformed and then do mongorestore with it on an old backup...
[09:25:58] <Derick> ernetas: wow! Where is the blog post and where is the script? :-)
[09:26:06] <davidcsi> http://stackoverflow.com/questions/25149384/mongodb-c-driver-does-not-upsert?noredirect=1#comment39156660_25149384
[09:26:16] <rspijker> ernetas: that’s awesome man!
[09:27:22] <Derick> davidcsi: I would guess it has to do with data types. "523331461111" is a string, do you use it as a number in C++?
[09:27:50] <Derick> your c++ snippet is missing that bit
[09:28:20] <davidcsi> @derick, no, all strings
[09:29:06] <Derick> davidcsi: do update the question though with how you create query_otid_or and query_sccp_or
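Derick's data-type hypothesis is worth illustrating: BSON equality is type-sensitive, so a query that supplies a number will not match a field stored as a string, and an upsert then inserts a new document instead of updating. A minimal Python simulation of that matcher behaviour (not the real server; the field name is a hypothetical example):

```python
# Minimal simulation of type-sensitive matching, as in BSON equality:
# a numeric query value never matches a string field, so an upsert
# would insert a new document rather than update the existing one.
def matches(doc, query):
    """Strict equality match: value AND type must agree."""
    return all(
        field in doc
        and type(doc[field]) is type(value)
        and doc[field] == value
        for field, value in query.items()
    )

doc = {"otid": "523331461111"}                   # stored as a string
assert matches(doc, {"otid": "523331461111"})    # string query: match
assert not matches(doc, {"otid": 523331461111})  # numeric query: no match
```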
[09:30:57] <howdoi> can't $pushAll take an empty value?
[09:31:05] <howdoi> say, null
[09:31:22] <Derick> I think it wants an array
[09:31:26] <Derick> did you try an empty array?
[09:31:41] <howdoi> I don't want an empty array
[09:31:50] <Derick> ?
[09:32:09] <Derick> sorry - you need to be more clear with what you have, and what you want to obtain then
[09:34:09] <howdoi> Derick: empty array works, but say if i $pushAll again it would be like [ [] ]
[09:34:15] <davidcsi> @Derick, it's updated: http://stackoverflow.com/questions/25149384/mongodb-c-driver-does-not-upsert
[09:34:41] <rspijker> howdoi: it shouldn’t be…
[09:34:54] <rspijker> pushall pushes the values in the ARRAY you give it
[09:35:01] <howdoi> rspijker:nice
[09:35:03] <howdoi> thanks guys
[09:35:06] <rspijker> so if you give it an empty array it shouldn’t push anything
[09:35:25] <howdoi> i was under that assumption
[09:39:31] <Derick> davidcsi: can't find any faults in it :-/
[09:39:44] <Derick> howdoi: why would there be [ [] ] ?
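The $pushAll semantics rspijker describes can be sketched as a plain list extension (a Python simulation of the operator, not the server implementation):

```python
# $pushAll appends each element of the given array to the target array,
# so pushing an empty array is a no-op -- it does not nest [] inside.
def push_all(target, values):
    target.extend(values)   # element-wise append, like $pushAll
    return target

arr = []
push_all(arr, [])           # empty array: nothing appended
assert arr == []            # not [ [] ]
push_all(arr, [1, 2])
assert arr == [1, 2]
```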
[09:40:44] <davidcsi> is there any way to do an update with "runCommand"?
[09:40:51] <howdoi> Derick:my bad
[09:40:51] <davidcsi> I might try that...
[09:41:28] <Derick> davidcsi: there is in 2.6
[09:42:01] <howdoi> any nodejs based web interface to admin mongodb?
[09:42:20] <howdoi> http://docs.mongodb.org/ecosystem/tools/administration-interfaces/
[09:42:26] <howdoi> did not help me much
[09:42:57] <Derick> davidcsi: https://github.com/mongodb/specifications/blob/master/source/server_write_commands.rst
[09:43:08] <Derick> howdoi: I've always used rockmongo
[09:43:13] <Derick> but that's PHP
[09:43:23] <Derick> (but without the need for a webserver)
[09:43:51] <rspijker> I had some big issues with rockmongo
[09:43:58] <rspijker> huge VM usage
[09:44:06] <Derick> oh
[09:44:17] <howdoi> Derick:will check that, robomongo does not sync with the DB well
[09:44:57] <howdoi> php_mongo extension installed
[09:44:58] <howdoi> hmm
[09:45:06] <howdoi> something node based would be good
[09:46:05] <rspijker> davidcsi: is there not a specific upsert function in the cpp driver? Or did I dream that?
[09:47:10] <davidcsi> rspijker, I haven't seen it... i've combed the API and haven't seen anything
[09:47:50] <davidcsi> there's this mongo::BulkUpdateBuilder::upsert but doesn't look like what i need
[09:49:05] <remonvv> Derick: Did you give your new CEO a backrub already? Can never start the pampering too early you know.
[09:49:24] <Derick> remonvv: :-) No.
[09:49:31] <remonvv> Not a career man are you
[09:49:33] <Derick> He's in NYC you know...
[09:49:52] <Derick> funny you say, actually having my annual review today ;-)
[09:49:55] <remonvv> Pff, a little ocean in the way and Derick gives up.
[09:50:08] <remonvv> Oh well if you need a reference let me know.
[09:50:17] <Derick> haha, it's ok :D
[09:50:41] <remonvv> No really, I'll tell them sweet little lies.
[09:51:14] <remonvv> "As the CTO of Facebook...and Twitter...and Microsoft..."
[09:53:28] <Derick> :-)
[09:54:27] <sec_> what chars are allowed in collection names?
[09:56:13] <Derick> http://docs.mongodb.org/manual/reference/limits/#Restriction-on-Collection-Names
[09:56:34] <remonvv> And dare I add; stick to just letters.
[09:57:02] <Derick> and the max length is 128 I think ... can't find that in that document
[09:57:13] <Derick> 123
[09:59:47] <sec_> Derick: thanks.
[10:09:07] <ssarah> guys
[10:09:16] <Zelest> girls?
[10:09:23] <ssarah> and girls <3
[10:09:27] <Zelest> :P
[10:09:36] <ssarah> i just installed mongodb on my machine. how do i access it with say, robomongo?
[10:09:50] <Zelest> no idea what robomongo is..
[10:09:53] <Zelest> fire it up and connect to localhost?
[10:11:31] <ssarah> ah yes, it gives me the default mongo port
[10:11:46] <ssarah> good old, fire up and try it approach
[10:11:57] <Zelest> learning by doing :D
[10:12:06] <ssarah> i just woke up, im lazy
[10:21:18] <sec_> every collection will include "_id" : ObjectId("53e0ae0b8481dd2a0d74560a"), ?
[10:23:16] <Derick> sec_: every document will, but you can set your own value for _id too
[10:24:22] <Zelest> Derick, my replicaset runs so awesomely good now :D
[10:24:24] <sec_> Derick: for version <2.4 too?
[10:24:39] <Zelest> sec_, yes.
[10:24:46] <sec_> Zelest: Thanks.
[10:25:16] <Zelest> Derick, I also did some benchmarks with GridFS vs normal docs for that CDN idea of mine.. and its actually faster with GridFS than using normal docs :o
[10:27:54] <davidcsi> ssarah you need to create a new connection via SSH
[10:28:37] <davidcsi> that is, if you're not on the same box...
[10:29:25] <Derick> Zelest: logging helps :)
[10:29:36] <Zelest> Derick, Hehe
[10:42:05] <ssarah> davidcsi: i am on the same box. But what do you mean, a new connection via SSH?
[10:49:22] <davidcsi> ssarah, on the create connection box, you should see a tab called "SSH", you create a tunnel using that, and the actual connection to mongo is via localhost
[10:51:41] <ssarah> ah, i see
[10:51:57] <ssarah> i'll try to find how to do that when i need. thanks bro
[10:52:05] <davidcsi> ssarah, np
[11:41:31] <remonvv> I don't understand why people on SO keep claiming embedding collections is the preferred way to manage 1:N relationships in MongoDB.
[11:48:38] <davidcsi> anyone know why on Debian, when I have installed "mongodb-org" (2.6) and I try to install "mongodb-dev", apt wants to remove mongodb-org???
[11:49:06] <Derick> that's a good question. Let me see
[11:49:46] <Derick> davidcsi: it's not from the same repository. It wants to install the 2.4.10dev package from the debian distribution
[11:49:51] <Derick> what do you need -dev for?
[11:50:10] <davidcsi> @Derick c++
[11:50:33] <Derick> i don't think we distribute that as a package *yet*
[11:50:38] <Derick> I'll ask, one sec.
[11:53:25] <Derick> asked... will let you know when I get an answer
[11:55:31] <davidcsi> cool, i'll wait for an answer
[11:56:58] <oleg008> hi everybody, short question: do normal indexes contain the full document in memory or just selected fields?
[11:57:08] <Derick> only the index fields
[11:57:19] <oleg008> thanks
[11:57:34] <Derick> or more specifically, only the index field *values*
[11:57:52] <oleg008> why is querying slower when documents are big?
[11:58:09] <Derick> more data has to be read?
[11:58:18] <Derick> perhaps you're not using the right index?
[11:59:12] <oleg008> so when not reading the big documents there should be zero slowdown and no additional memory usage ...
[12:21:05] <oleg008> what happens if I have a very big field in a document, e.g. a description, but when I fetch I don't select it? Will this perform at the same speed as if there were no description, or is there an overhead?
[12:22:37] <Derick> oleg008: the only difference is that those fields are not sent over the wire
[12:22:46] <Derick> a document is always read/brought into memory in full
[12:22:55] <oleg008> aha!
[12:23:02] <oleg008> thats where I spend my ram
[12:23:13] <Derick> hmm
[12:23:20] <Derick> mongodb deals with memory in a different way
[12:23:24] <Derick> it maps *all* data into memory
[12:23:37] <Derick> and leaves it to the OS to fetch things from disk as needed (it's like paging)
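Derick's description of mmap-based storage can be demonstrated with any memory-mapped file: mapping the whole file is cheap, and the OS faults pages in only as they are touched. A small Python sketch using the standard mmap module:

```python
# Sketch of the memory mapping Derick describes: the whole file is
# mapped into the address space, but the OS only pages in what is touched.
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"x" * 1024 * 1024)  # a 1 MiB "data file"
    with mmap.mmap(fd, 0, access=mmap.ACCESS_READ) as mm:
        mapped_len = len(mm)   # the whole file is addressable...
        head = mm[:4]          # ...but only touched pages get faulted in
finally:
    os.close(fd)
    os.remove(path)

assert mapped_len == 1024 * 1024
assert head == b"xxxx"
```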
[12:24:07] <oleg008> so I would fill my memory anyway, independent of how indexes or fetching work
[12:24:39] <Derick> yes
[12:24:44] <oleg008> thanks
[12:25:04] <oleg008> are there best practices how to limit mongod?
[12:25:17] <Derick> there are none, it will use all the memory it needs/can
[12:25:42] <oleg008> my problem is that it uses all my memory
[12:25:52] <oleg008> there is not much free for other processes
[12:25:56] <oleg008> I need to limit it somehow
[12:26:04] <Derick> what other processes?
[12:26:09] <Derick> but, this is by design.
[12:26:23] <oleg008> you mean the OS doesn't need any free memory?
[12:26:36] <Derick> the OS of course gets to use memory too :)
[12:26:55] <oleg008> yeah but when mongod eats everything what will OS do?
[12:27:30] <oleg008> I see
[12:27:30] <oleg008> free -m
[12:27:30] <oleg008> total used free shared buffers cached
[12:27:31] <oleg008> Mem: 3752 3709 43 0 30 3394
[12:27:31] <oleg008> -/+ buffers/cache: 284 3468
[12:27:33] <Derick> oleg008: this is called "swapping" - the OS manages MongoDB's memory consumption - even though MongoDB has all data *mapped* into memory.
[12:27:48] <Derick> not sure how to explain this better...
[12:28:11] <oleg008> so os will take some memory from mongo when needed
[12:28:35] <Derick> oleg008: yes
[12:28:56] <oleg008> and running other processes on the same machine seems to be a bad idea by design of mongo
[12:29:19] <Derick> mongodb is designed to be the only thing on a machine
[12:29:26] <oleg008> I see
[12:29:30] <Derick> (besides the OS of course)
[12:34:31] <Derick> duh, davidcsi timed out as I wanted to give an answer
[12:57:31] <rspijker> Derick: it’s not really swapping is it… Paging, sure. But it shouldn’t swap anything?
[12:58:07] <Derick> rspijker: if you run out of memory, sure.
[12:58:19] <Derick> rspijker: used that term as it's often more "windows user" friendly
[13:00:50] <rspijker> well, at some point it would swap, sure… But since mongo just mmaps the data, it will only ever page that out, right? Or are there circumstances where it could write that to swap space as well?
[13:01:44] <Derick> rspijker: it's the same thing, not? :)
[13:01:54] <rspijker> ehm… no?
[13:02:05] <Derick> hang on - i have a meeting
[13:02:08] <Derick> back in 1h :-)
[13:02:13] <rspijker> cool :)
[13:13:00] <rasputnik> mongo replication seems to be a bit of a black box - if i have replica members that don't seem to be catching up, where do i start to debug?
[13:22:11] <sec_> Derick: last document always is system. ?
[13:24:53] <Derick> document?
[13:28:07] <future28> Hey
[13:28:11] <future28> I am in big trouble
[13:28:41] <future28> I've been writing to a database and I have 1 GB left on my current partition
[13:28:51] <future28> I need to move it while it's working - Can I do this?
[13:30:46] <future28> Also - why did my 2Gb dataset blow up to 10Gb?
[13:31:30] <oleg008> write db.stats(1024*1024)
[13:31:30] <oleg008> its most probably indexes
[13:31:49] <kali> future28: http://docs.mongodb.org/manual/core/storage/
[13:32:25] <future28> oleg008: What does that do? I'll do it now
[13:32:37] <oleg008> gives you stats
[13:33:43] <future28> oleg008: http://pastebin.com/HgAz6Q0q
[13:34:21] <oleg008> "fileSize" : 4031,
[13:34:27] <oleg008> your db is 4 gb
[13:34:29] <oleg008> on disk
[13:34:32] <oleg008> not 10
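The scale argument is what makes this readable: db.stats(1024*1024) reports byte counts divided by the scale, so the numbers are megabytes. Checking the arithmetic on the pasted value:

```python
# db.stats(1024*1024) divides byte counts by the scale argument,
# so "fileSize": 4031 means ~4031 MB (~3.9 GB) on disk -- not 10 GB.
scale = 1024 * 1024
file_size_mb = 4031                     # value from the pastebin
file_size_bytes = file_size_mb * scale
file_size_gb = file_size_bytes / 1024**3
assert 3.9 < file_size_gb < 4.0
```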
[13:34:58] <future28> Right, I had 8 GB free quota and now I have 1
[13:35:03] <future28> So it's using 7
[13:35:16] <oleg008> no, it's using 4
[13:35:25] <oleg008> something else is using the rest
[13:35:29] <future28> Ahhhh
[13:35:31] <future28> Uh oh
[13:36:12] <future28> Thanks for that
[13:36:14] <future28> No idea what it is
[13:40:24] <future28> hello again
[13:40:27] <future28> http://pastebin.com/CcFMNUQf
[13:40:43] <future28> Does this imply that I have old stuff lying around? Or does it all belong to the same collection?
[13:42:14] <future28> oleg008: ^
[13:42:33] <oleg008> I only see one db named parkingdata
[13:42:49] <future28> oleg008: That's right - So I guess I am out of luck
[13:42:52] <future28> Thanks for that anyway
[13:49:36] <rspijker> future28: if you have journalling turned on, that can also use 3GB
[13:49:45] <rspijker> mongo has a very large footprint initially…
[13:54:28] <remonvv> Your opinions please: should a typical ODM mapping library reject types that do not have a 1:1 mapping to a BSON type, or should it try to do non-lossy casts where needed? For example, in Java should {short a;} throw an error (16-bit signed integers are not supported in BSON) or auto-cast it to int(32)
[13:56:17] <kali> remonvv: i think in java it should throw
[13:56:33] <oleg008> your application shoud use types which are ok for the db
[13:56:50] <davidcsi> hello guys,something's wrong with mongodb's downloads:
[13:56:52] <davidcsi> wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.6.4-rc1.tgz
  --2014-08-06 15:54:39-- https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.6.4-rc1.tgz
  Resolving fastdl.mongodb.org... 54.230.130.126, 54.230.131.75, 54.230.128.121, ...
  Connecting to fastdl.mongodb.org|54.230.130.126|:443... connected.
  OpenSSL: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
  Unable to establish SSL conne
[13:57:10] <kali> remonvv: if you're odming, then you have strong data types, so you'd better go all the way
[13:58:04] <remonvv> kali : That's my opinion but some people here are making the somewhat valid case that where non-lossy auto casting is possible it would help with legacy POJOs and whatnot. I mean there's no actual risk in doing Java short -> BSON int32 and back again.
[13:58:13] <remonvv> Figured i'd poll for opinions here ;)
[13:59:42] <sec_> Derick: _id, system.
[13:59:46] <kali> remonvv: i think i would prefer to refactor the crap out of the pojos...
[14:00:21] <remonvv> kali++
[14:00:33] <remonvv> Ah, cheeser still hasn't made mongobot
[14:00:49] <kali> remonvv: you'll have to include a toggle :)
[14:02:00] <remonvv> I'm violently opposed to making everything configurable. Yay for opinionated libraries ;)
[14:02:05] <remonvv> No choice = no problem
[14:02:32] <rspijker> I’d actually vote in favour of lossless casts…
[14:02:51] <rspijker> it could become an issue if you want to deserialize at some point though…
[14:03:04] <remonvv> Hm, what do you consider the arguments in favor of that?
[14:03:15] <remonvv> I was thinking legacy code could be a factor.
[14:03:26] <rspijker> mostly that
[14:03:34] <remonvv> But other than that and perhaps memory usage it seems hard to justify
[14:03:55] <remonvv> Fair enough
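The non-lossy cast remonvv is weighing up can be verified exhaustively: every 16-bit signed value fits in a 32-bit signed integer, so short -> int32 -> short round-trips without loss. A Python check using struct to model the fixed-width types:

```python
# remonvv's casting question: every 16-bit signed value fits in int32,
# so short -> int32 -> short round-trips without loss.
import struct

def widen_then_narrow(v):
    """short -> int32 -> short, as an ODM auto-cast would do."""
    widened = struct.unpack("<i", struct.pack("<i", v))[0]    # stored as int32
    return struct.unpack("<h", struct.pack("<h", widened))[0]  # read back as short

# every representable short survives the round trip
lossless = all(widen_then_narrow(v) == v for v in range(-32768, 32768))
assert lossless
```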
[14:09:30] <reencoded> ALL HAIL THE MON GOD!!!
[14:16:49] <noir_> sup
[14:17:26] <kali> reencoded: you should try to translate "mon god" from french.
[14:21:52] <xissburg> my god?
[14:23:36] <kali> haaa... i thought google translate would fix the typo... "gode" is french slang for "dildo", and it sounds the same as "god"
[14:23:55] <oleg008> sounds reasonable
[14:27:25] <reencoded> lol
[14:30:32] <kali> could not find the dildo-in-the-luggage scene from fight club
[14:30:46] <kali> yes, i'm bored
[14:31:16] <Derick> hmm...
[14:32:41] <rspijker> probably because it was a vibrator kali
[14:33:31] <Derick> not sure whether this topic is fitting for here...
[14:34:31] <kali> rspijker: http://www.imdb.com/title/tt0137523/quotes?item=qt0479168
[14:34:35] <kali> Derick: ok, i'm done.
[14:35:17] <Derick> davidcsi: back again?
[14:35:41] <rspijker> kali: I like the fact that it ends with I don’t own… and then the next quote is: the things you own end up owning you :P
[14:36:15] <richthegeek> i am most recently owned by a donut then
[14:36:38] <richthegeek> if it's possible to be owned by something which no longer exists in an identifiable form?
[14:37:30] <kali> richthegeek: possessed ?
[14:37:40] <richthegeek> ich bin ein berliner
[14:38:03] <richthegeek> although it was iced, not jam-filled
[14:46:58] <davidcsi> @Derick, yep
[14:50:12] <Derick> davidcsi: so... the reason why there is no package from our side is because it's not 1.0.0 yet!
[14:50:20] <Derick> and, in general, we don't make packages ourselves for drivers
[14:50:59] <davidcsi> @Derick: ouch!
[14:51:37] <davidcsi> I guess I will have to downgrade, then...
[14:51:50] <davidcsi> the development is being done on the same box as mongodb
[14:52:48] <reencoded> mongo is such a bad name
[14:53:05] <reencoded> means weak in spanish
[14:53:30] <reencoded> i would prefer mangoDB
[14:53:32] <reencoded> lol
[14:57:22] <Derick> gah
[14:57:27] <Derick> again when I want to answer!
[14:57:28] <Derick> :)
[14:57:50] <reencoded> mangodb
[15:00:17] <reencoded> mongolDB
[15:01:08] <reencoded> hmmm mongo is an ethnic group and language!
[15:01:18] <reencoded> http://en.wikipedia.org/wiki/Mongo_people
[15:01:50] <remonvv> I can tell we're completely on topic again ;)
[15:01:53] <reencoded> http://en.wikipedia.org/wiki/Mongo_language language code: lol haha
[15:03:57] <reencoded> http://en.wikipedia.org/wiki/Mongo_(planet)
[15:04:18] <reencoded> anyway.. yeah... im taking the course on java+mongo development
[15:04:42] <uehtesham90> hello, during aggregation, if i am grouping by multiple fields and one of the fields does not exist in a document, how will the grouping happen? e.g. if i group by name, email, nickname and email does not exist in a document, will it group with email as null?
[15:04:55] <reencoded> and since i frequent freenode im like hanging around here too :p
[15:19:08] <quantum> Hey guys, I have a slightly newbie question. I've been using mysql for ages, and I can view the tables with phpMyAdmin, which is useful for backing them up, and I can also display information on my website by fetching info from the table
[15:20:12] <quantum> I just installed mongodb on my server, how can I do the same things using mongo?
[15:20:22] <quantum> Is there a web UI?
[15:20:55] <rspijker> quantum: there are several
[15:21:08] <rspijker> http://docs.mongodb.org/ecosystem/tools/administration-interfaces/
[15:21:29] <rspijker> I personally prefer robomongo, even though it’s not a web UI
[15:21:37] <rspijker> there are plenty of web UIs on that list though
[15:24:17] <rspijker> uehtesham90: it will show as null, yes
[15:26:48] <uehtesham90> then it will group by null?
[15:28:51] <rspijker> group groups for unique values of the _id key
[15:29:04] <rspijker> so if part of that is null, it will be considered exactly the same as any other value
[15:30:12] <uehtesham90> ok thnx :)
[15:30:42] <rspijker> np
[15:32:37] <rspijker> uehtesham90: that does mean that documents which have that field explicitly set to null will be in the same group as documents that are missing the field
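rspijker's point about $group keys can be simulated in a few lines: a missing field and an explicit null both contribute null to the group _id, so those documents land in the same bucket (a Python sketch of the grouping, not the aggregation framework itself):

```python
# Simulation of $group key behaviour: a missing field and an explicit
# null both become null in the group _id, so they share a bucket.
from collections import defaultdict

docs = [
    {"name": "a", "email": "a@x"},
    {"name": "b", "email": None},  # explicit null
    {"name": "b"},                 # field missing entirely
]

groups = defaultdict(list)
for d in docs:
    key = (d["name"], d.get("email"))  # missing -> None, like $group
    groups[key].append(d)

assert len(groups[("b", None)]) == 2   # null and missing grouped together
```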
[15:33:50] <uehtesham90> oh really? i didnt know that...but in my collection, there are no fields set to null....is there a way to take care of this situation?
[15:34:09] <uehtesham90> maybe i can use $exists in my $match operator?
[15:35:46] <dukeatcoding> hey guys, how do i delete a single file from gridfs when i know the mongoid, via the mongo shell ?
[15:35:52] <dukeatcoding> db.fs.files.delete("53e24812c0fc4674d1d6bc4b");
[15:35:57] <dukeatcoding> doesnt work
[15:36:07] <dukeatcoding> and even new MongoId doesn't work
[15:36:14] <dukeatcoding> 0 TypeError: Property 'delete' of object testfs.fs.files is not a function
[15:38:44] <rspijker> uehtesham90: you can do that, filter out documents where the field doesn't exist, or filter out documents where it's explicitly set to null
[15:38:53] <uehtesham90> cool
[15:39:11] <rspijker> dukeatcoding: because of exactly that… delete isn't a function
[15:39:17] <rspijker> remove is
[15:39:28] <rspijker> but even then, I’m not sure that that’s the way to remove files from gridFS
[15:39:33] <dukeatcoding> sry, didn't find it documented
[15:39:40] <dukeatcoding> what would you think is a proper way ?
[15:41:17] <dukeatcoding> ic the chunks are still there... so mongofiles or one of the driver functions should be used
[15:41:29] <dukeatcoding> no other way to properly do both in the shell ?
[15:41:38] <rspijker> dukeatcoding: from what I recall, gridFS is not meant to be used via the shell
[15:41:53] <rspijker> for the simple reason that you don't have files in the shell...
[15:42:01] <dukeatcoding> ;)
[15:42:17] <dukeatcoding> i am experimenting right now just wanted to delete that single file ;)
[15:42:30] <rspijker> that being said, the chunks documents all have a files_id
[15:42:35] <rspijker> which should point to the file...
[15:43:00] <rspijker> so.. if you have the _id of the file
[15:43:23] <rspijker> you could just remove from chunks with files_id:_id_of_the_file
[15:43:31] <rspijker> that should work… I think...
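The two-step removal rspijker sketches follows from GridFS's data model: a file is one fs.files document plus fs.chunks documents keyed by files_id, and both must go. An in-memory Python illustration (the _id is the one from the log; against a live database the same idea is remove() on both collections, and drivers wrap it for you):

```python
# In-memory sketch of GridFS's two collections: deleting a file means
# removing its fs.files document AND every fs.chunks document whose
# files_id points at it (in the shell: remove, not delete).
file_id = "53e24812c0fc4674d1d6bc4b"

fs_files = [{"_id": file_id, "filename": "test.txt"}]
fs_chunks = [
    {"files_id": file_id, "n": 0, "data": b"..."},
    {"files_id": file_id, "n": 1, "data": b"..."},
]

fs_files = [f for f in fs_files if f["_id"] != file_id]
fs_chunks = [c for c in fs_chunks if c["files_id"] != file_id]

assert fs_files == [] and fs_chunks == []  # no orphaned chunks left behind
```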
[15:44:14] <dukeatcoding> probably yes
[15:44:46] <dukeatcoding> would dropping the complete collections be bad... mongo should auto recreate them with the next gridfs insert, shouldn't it?
[15:45:46] <rspijker> for a regular collection, yes
[15:45:50] <rspijker> for grid, no clue
[15:46:08] <Derick> gridfs is nothing special
[15:46:11] <agenteo> hi, I am trying to run a one-off query, something like “db.myColl.find({_id: {$in: LIST_OF_IDS}})”. I would like the list of ids to come from another query… I understand I can use toArray, but .find returns key-value pairs. Is there a way to return only values?
[15:46:12] <Derick> it's just two collections
[15:46:28] <dukeatcoding> Derick: that was what i was thinking
[15:46:35] <Derick> if you remove them both, then an insert should create them both as well, as it's just two (or more) inserts into multiple collections
[15:46:47] <dukeatcoding> is "fs" not an allowed collection name anymore, since fs.files and fs.chunks are there?
[15:46:52] <Derick> i know that the php driver even creates indexes
[15:47:03] <Derick> dukeatcoding: no, shouldn't matter
[15:47:50] <dukeatcoding> agenteo: never seen only values...
[15:48:54] <agenteo> I am coming from classic SQL; what if you want to run a query like SELECT * FROM table WHERE id IN (ANOTHER QUERY);
[15:49:04] <Derick> agenteo: you need to do two queries
[15:49:21] <Derick> *or* rethink your database schema
[15:49:22] <agenteo> ok and massage the key value output accordingly I guess
[15:49:43] <agenteo> naa it’s a one off query for some one off data migration
[15:49:53] <ssarah> on the example here, http://www.tutorialspoint.com/mongodb/mongodb_insert_document.htm, how do i go about running that code in mongo? i can't copy paste properly....
[15:49:58] <rspijker> agenteo: you can use aggregation framework to massage the data
[15:50:28] <rspijker> ssarah: why not?
[15:50:34] <rspijker> I can copy it fine
[15:50:43] <agenteo> ok thanks for the tip, but I reckon I’ll just search and replace the _id out in vim and create the array I need
[15:50:45] <agenteo> thanks
[15:50:49] <rspijker> just don’t copy the leading > :P
[15:50:58] <ssarah> i try again
[15:51:35] <ssarah> he's giving me trouble. stuff about unexpected tokens when i paste
[15:51:50] <rspijker> agenteo: that’s usually what I do actually ^^ find().forEach(function(x){print(“\””+x._id+”\”,”);})
[15:52:02] <rspijker> then slap a var ids = [ … ] around it and you're golden
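rspijker's shell one-liner just builds a quoted, comma-separated list of ids; the same text can be generated anywhere. A Python version (the ids are hypothetical examples):

```python
# Build a JS array literal of quoted ids, like rspijker's
# find().forEach(...) trick, ready to paste back into the shell.
ids = ["53e0ae0b8481dd2a0d74560a", "53e0ae0b8481dd2a0d74560b"]

literal = "var ids = [" + ",".join('"%s"' % i for i in ids) + "];"
assert literal == (
    'var ids = ["53e0ae0b8481dd2a0d74560a","53e0ae0b8481dd2a0d74560b"];'
)
```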
[15:52:14] <dukeatcoding> ssarah: maybe make a screenshot of your console and share it with us
[15:52:28] <rspijker> ssarah: it’s the quotes
[15:52:39] <rspijker> I didn't try to actually execute it
[15:52:50] <rspijker> but they’re using single quotes
[15:53:17] <rspijker> which is fine, but they are also in words like doesn’t :)
[15:53:42] <rspijker> actually, doesn’t is probably the only issue there...
[15:56:15] <ssarah> damnti
[15:56:17] <ssarah> *damnit
[15:56:28] <ssarah> bad tutorial
[15:57:30] <rspijker> probably just the software they use for the web content
[15:59:37] <ssarah> oh. yeh, maybe...
[15:59:41] <ssarah> ty
[15:59:44] <dukeatcoding> what is the python driver argument to assign a filename to a put BLOB
[15:59:57] <dukeatcoding> so fs.put(blob, argument=file.txt)
[16:03:18] <dukeatcoding> really easy ;)
[16:03:30] <dukeatcoding> fs.put(blob,filename="test.txt")
[16:20:25] <dukeatcoding> is there anything to read on improving gridfs throughput performance ?
[16:20:43] <dukeatcoding> i am currently working on a benchmark comparing crate.io performance with mongodb performance
[16:21:06] <dukeatcoding> with 16 python processes putting new files to gridfs i only get a throughput of about 4 files per second
[16:21:18] <dukeatcoding> where crate did 10 /s
[16:23:55] <anildigital> hi people, I am using mac and have installed mongodb using homebrew..
[16:24:06] <anildigital> anyone know how to log the mongo queries
[16:24:13] <anildigital> done by application
[16:27:20] <anildigital> anyone?
[16:29:20] <ernetas> Hmm...
[16:29:36] <ernetas> How do I fix "local.oplog.rs is not empty on the initiating member" without deleting local.*?
[16:29:36] <dukeatcoding> it's definitely in the oplog ;)
[16:29:42] <dukeatcoding> never tried to log it explicitly
[16:29:49] <anildigital> dukeatcoding: hmm
[16:30:02] <anildigital> in rails like framework.. we can see actual SQL which ran
[16:30:04] <dukeatcoding> anildigital: what do you need it for ?
[16:30:14] <anildigital> it is very helpful in development of apps
[16:30:25] <anildigital> dukeatcoding: I need to find out which query got fired
[16:30:29] <anildigital> from my app
[16:30:41] <anildigital> and then copy that query and run directly on mongo
[16:31:00] <dukeatcoding> for debugging reasons ?
[16:31:13] <anildigital> dukeatcoding: yep.. obviously
[16:31:14] <dukeatcoding> http://docs.mongodb.org/manual/reference/method/db.currentOp/ shows you the currently running ops but you have to "reconstruct" the query itself
[16:31:40] <dukeatcoding> probably you would log it with the app when an error occurs instead of logging all past queries
[16:33:28] <anildigital> dukeatcoding: I want to log all queries while in development mode of my app
[16:33:43] <anildigital> I am surprised google is not turning up any results
[16:33:49] <anildigital> dukeatcoding: I want to log it to a file
[16:34:19] <dukeatcoding> anildigital: http://stackoverflow.com/questions/15204341/mongodb-logging-all-queries
[16:34:23] <dukeatcoding> 2 approaches found on stackoverflow
[16:34:27] <anildigital> I am not interested in knowing connection information though
[16:35:04] <dukeatcoding> you could use some log filter tools to finally filter out the queries
[16:35:37] <joshua> http://docs.mongodb.org/manual/reference/command/profile/
[16:42:53] <ernetas> Is it safe to remove oplog when server is down?
[16:43:05] <ernetas> (doesn't sound like very safe to me... :/ )
[16:55:04] <anildigital> I have set profilingLevel(2)
[16:55:10] <anildigital> but db.system.profile is not showing anything there
[16:57:43] <anildigital> can't mongodb client log queries?
[17:00:01] <stefandxm> you can let mongodb log queries and execute time etc
[17:00:32] <stefandxm> http://docs.mongodb.org/manual/reference/command/profile/
[17:00:48] <stefandxm> ah sorry you already had
[17:00:49] <anildigital> also it's not logging queries at all
[17:00:50] <anildigital> db.system.profile.find({"op": "query"})
[17:00:56] <anildigital> it's logging something else
[17:01:22] <stefandxm> dumb question, but are you setting it on the correct database?
[17:01:26] <anildigital> https://gist.github.com/anildigital/0900db8810470b469dab
[17:02:54] <anildigital> stefandxm: in which database should I set profiling level
[17:03:04] <anildigital> I have set in the one, I want to see queries for
[17:09:25] <Moonjara> Hi! I don't know if I'm in the right place, but I'd like some help with basic manipulation in Java. I have a directory with several links as embedded documents. If I do db.directories.findOne({}) on the command line I get every link in my directory, but in Java the findOne method doesn't get the embedded documents. Does anyone have any clue to help me? Thanks anyway!
[17:11:12] <anildigital> Is it possible to log mongo queries run by node mongo driver?
[17:11:23] <anildigital> Google is not helping at all
[17:13:22] <kali> anildigital: try mongosniff
[17:13:34] <kali> anildigital: and you probably did something wrong, the profiler DOES work
[17:14:18] <anildigital> not sure it's mongo or node
[17:14:36] <anildigital> I was able to see SQL queries when I used MySQL with rails
[17:14:56] <anildigital> there is no good solution to see queries with the node driver, or maybe mongo is lacking here
[17:15:11] <kali> anildigital: mongosnif :P
[17:15:51] <kali> anildigital: and again, the profiler does work.
[17:16:09] <anildigital> kali: okay.. but does profiler shows query as ran?
[17:17:11] <kali> well, it logs them in the server logs
[17:17:19] <kali> and in the collection
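For reference, a minimal mongo-shell session showing the profiler setup being discussed — this runs against a live mongod, and the database and collection names here are placeholders:

```javascript
// In the mongo shell; "mydb" and "stuff" are invented names.
var mydb = db.getSiblingDB("mydb");   // profiling is per-database
mydb.setProfilingLevel(2);            // level 2 records every operation
mydb.stuff.find({ x: 1 });            // any query, to generate an entry
mydb.system.profile.find({ op: "query" }).pretty();  // read it back
```

If nothing shows up, the usual cause is exactly what stefandxm guessed: profiling was enabled on a different database than the one the queries ran against.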
[17:18:14] <anildigital> yep.. typo from my side.. when I did use dbname
[17:18:24] <anildigital> but mongo should throw error..
[17:18:31] <anildigital> it uses any random db
[17:18:42] <anildigital> use something should not work unless something exists
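The behaviour anildigital hit is deliberate: `use` never fails, because databases and collections are created lazily on first write. A quick shell illustration (names invented):

```javascript
// getSiblingDB is the scripted equivalent of "use".
var d = db.getSiblingDB("no_such_db"); // succeeds even though the db doesn't exist
// show dbs            -> no_such_db is absent until the first write
d.things.insert({ a: 1 });             // first write creates db and collection
// show dbs            -> now it appears
```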
[17:19:04] <kali> that's not the mongodb philosophy
[17:19:12] <anildigital> hmm
[17:19:29] <anildigital> also node.js basic mongo driver doesn't have capability to log queries
[17:20:55] <anildigital> http://mongodb.github.io/node-mongodb-native/
[17:24:02] <tejas-manohar> anyone here use mongoose?
[17:26:54] <anildigital> tejas-manohar: I had used it earlier.. it's ok
[17:27:09] <tejas-manohar> need some help finding out how its configured here
[17:27:12] <tejas-manohar> https://github.com/tejas-manohar/craigs-yo
[17:27:17] <tejas-manohar> i just installed it via homebrew
[17:37:11] <ssarah> k, i think i grasped the concepts now. What i've been asked to do is to set up an environment where i have 2 shards (each a replica set of 2) on the same machine... i think with arbiters and whatnot
[17:37:19] <ssarah> got any reference i can use for this?
[17:40:26] <ssarah> http://docs.mongodb.org/manual/tutorial/ <- plenty here to read, i guess -_-
[17:45:08] <joshua> You'll need some arbiter processes at least. You can't really have a replica set with 2 members cause if one goes down it won't know what to do
[17:45:52] <ssarah> yeh, i've been told
[17:46:12] <joshua> arbiter doesn't require any data to be stored and you can disable oplog on it
[17:46:18] <ssarah> are arbiters part of mongo or something external?
[17:46:55] <joshua> It runs from another mongod process
[17:47:31] <ssarah> i have to figure out how to launch/control those, i'll read a bit more before i bother you guys some more ;)
[17:48:57] <joshua> http://docs.mongodb.org/manual/tutorial/add-replica-set-arbiter/
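A sketch of what that tutorial walks through — the arbiter is just another mongod process in the same replica set, holding no data (port, dbpath, and set name here are invented):

```sh
# Start the arbiter as an ordinary mongod in the same replica set:
mongod --port 30000 --dbpath /data/arb --replSet rs0

# Then, from a mongo shell connected to the current primary:
#   rs.addArb("hostname:30000")
```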
[17:52:19] <ssarah> ty <3
[18:02:39] <ssarah> just one question, if i'm running two shards (a replica set of two each) on the same machine (for staging purposes), am i gonna need more than one mongod?
[18:08:08] <kali> oh yeah
[18:09:05] <kali> you'll have a minimum of 2x2=4 replica (mongod), 1 config server (also mongod) and one mongos
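Spelled out, kali's six processes for a single-machine staging setup might be launched like this (a single config server was still acceptable in this era; all ports and dbpaths are invented):

```sh
# shard 1: two data-bearing members
mongod --shardsvr --replSet shard1 --port 27101 --dbpath /data/s1a
mongod --shardsvr --replSet shard1 --port 27102 --dbpath /data/s1b
# shard 2: two data-bearing members
mongod --shardsvr --replSet shard2 --port 27201 --dbpath /data/s2a
mongod --shardsvr --replSet shard2 --port 27202 --dbpath /data/s2b
# config server
mongod --configsvr --port 27019 --dbpath /data/cfg
# query router
mongos --configdb localhost:27019 --port 27017
# then, after rs.initiate() on each set, from a shell on the mongos:
#   sh.addShard("shard1/localhost:27101")
#   sh.addShard("shard2/localhost:27201")
```

Per rasputnik's warning below each two-member set would also want an arbiter, adding two more mongod processes.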
[18:10:35] <ssarah> whats mongos ?
[18:10:52] <kali> ssarah: read more :)
[18:11:41] <ssarah> aight
[18:22:58] <rasputnik> ssarah: don't run 2 member replica sets, pick an odd number
[18:23:26] <rasputnik> sorry just read back
[18:32:52] <ssarah> i have instructions for it to be like this =/
[18:33:04] <ssarah> i can discuss, but i'll first learn how to build the bloody thing
[18:33:41] <staykov> hey i have a feeling this is a dumb question
[18:34:29] <staykov> but is there a way to set a weight on a sort? for example if i have two fields, reviewsCount and testimonialCount
[18:35:18] <staykov> i want all the reviews to be sorted first, and then sort based on testimonial count
[18:35:34] <staykov> it seems like no, i shouldnt even do this on the database
[18:36:09] <kali> wait wait wait
[18:36:14] <kali> give me an example
[18:49:07] <staykov> sorry had a call: i want the results to be sorted by testimonialCount, and then reviewsCount - so the result should be that all the objects that have the same number of testimonials are then sorted by reviews
[19:00:18] <kali> well, that's what i thought... .sort({ reviewsCount: -1, testimonialCount: -1 })
[19:00:36] <kali> to get the most reviewed and testimonialed first
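Key order in the sort document is what gives the weighting staykov asked about: the first key is the primary sort, and later keys only break ties. A plain-JavaScript analogue (sample data invented), with testimonialCount first as staykov described:

```javascript
// Sample documents (invented data) mirroring the fields in the question.
const docs = [
  { name: "a", reviewsCount: 5, testimonialCount: 2 },
  { name: "b", reviewsCount: 9, testimonialCount: 2 },
  { name: "c", reviewsCount: 1, testimonialCount: 7 },
];

// Analogue of .sort({ testimonialCount: -1, reviewsCount: -1 }):
// sort descending by the first key; the second key only decides ties.
const ordered = [...docs].sort(
  (x, y) =>
    y.testimonialCount - x.testimonialCount ||
    y.reviewsCount - x.reviewsCount
);

console.log(ordered.map((d) => d.name)); // [ 'c', 'b', 'a' ]
```

c sorts first on testimonials alone; a and b tie on testimonials, so reviews decide between them.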
[19:44:27] <CaptainLex> I have a pretty newbish question
[19:45:25] <CaptainLex> Basically, I have a record I'd like to store that contains a bit of atomic information and also references to other structures
[19:45:54] <CaptainLex> (It's the idea of a movie "screening", which includes references to the films being shown, the venue where it's happening, the showtimes, &c)
[19:47:15] <CaptainLex> Is the mongo convention to embed the entire referenced structure in the document (as in, is that how I should design the program), or would I still embed a reference like an ID instead?
[19:47:38] <CaptainLex> And if the latter, does the query language allow me to "piece together" the main structure and suck up all the reference objects?
[19:48:06] <CaptainLex> I don't know much about databases so I wasn't sure what to search for on the documentation
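Both conventions CaptainLex mentions exist: embedding suits sub-documents owned by the parent and always read with it, while referencing by id suits shared data (one venue hosts many screenings). MongoDB of this era has no server-side join, so with references the application "pieces together" the record itself. A toy in-memory sketch of that manual join (all names and data invented):

```javascript
// Toy in-memory "collections", keyed by _id (all ids and data invented).
const films = { 1: { _id: 1, title: "Metropolis" } };
const venues = { 10: { _id: 10, name: "Rialto" } };

// Referencing style: the screening stores ids, not whole documents.
const screening = { filmIds: [1], venueId: 10, showtimes: ["19:00", "21:30"] };

// The "piece together" step: one extra lookup per referenced document,
// done by the application, since the server offers no join here.
function resolve(s) {
  return {
    films: s.filmIds.map((id) => films[id]),
    venue: venues[s.venueId],
    showtimes: s.showtimes,
  };
}

const full = resolve(screening);
console.log(full.venue.name); // Rialto
```

With embedding, the film and venue documents would instead sit whole inside the screening, read back in one query but duplicated across screenings.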
[20:55:21] <tejas-manohar> hey guys i needa drop my db completely because i cant start my node app because of it
[21:35:53] <tejas-manohar> how do i drop/clear db
[21:35:55] <tejas-manohar> from mongo console
[21:43:06] <joshua> tejas-manohar: use database, and then db.dropDatabase(); ( so make real sure you run use on the correct one first)
[21:43:36] <joshua> or you can drop it per collection instead
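Joshua's two options as a shell fragment (the database and collection names are placeholders; this is destructive, so check the name first):

```javascript
var target = db.getSiblingDB("mydb");  // equivalent of "use mydb"
target.dropDatabase();                 // drops the whole current database
// or, per collection:
// target.mycollection.drop();
```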
[22:26:58] <enjawork> I’m trying to figure out optimizing my indexes. if i have an $or query that would hit 2 different indexes, should i make a new index that indexes both fields im hitting in the $or?
[22:27:16] <enjawork> further, if i use a $hint to specify one of those indexes, is that going to ignore the other index?
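On the $or question: the planner can satisfy each $or clause with its own index, so the two single-field indexes are generally enough and a combined index is not required the way it would be for an $and-style query. And yes, hint() pins the whole query to the named index, which can defeat that per-clause planning. A shell sketch (collection and field names invented):

```javascript
db.items.ensureIndex({ a: 1 });
db.items.ensureIndex({ b: 1 });
// Each $or clause can be answered by its own index:
db.items.find({ $or: [{ a: 1 }, { b: 2 }] }).explain();
// hint() forces the single named index for the whole query:
db.items.find({ $or: [{ a: 1 }, { b: 2 }] }).hint({ a: 1 }).explain();
```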